00:00:00.001 Started by upstream project "autotest-per-patch" build number 132846 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.046 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.047 The recommended git tool is: git 00:00:00.047 using credential 00000000-0000-0000-0000-000000000002 00:00:00.049 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.074 Fetching changes from the remote Git repository 00:00:00.076 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.119 Using shallow fetch with depth 1 00:00:00.119 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.119 > git --version # timeout=10 00:00:00.174 > git --version # 'git version 2.39.2' 00:00:00.174 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.211 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.211 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.844 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.855 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.867 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:03.867 > git config core.sparsecheckout # timeout=10 00:00:03.879 > git read-tree -mu HEAD # timeout=10 00:00:03.895 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.919 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.919 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.023 [Pipeline] Start of Pipeline 00:00:04.036 [Pipeline] library 00:00:04.037 Loading library shm_lib@master 00:00:04.038 Library shm_lib@master is cached. Copying from home. 00:00:04.052 [Pipeline] node 00:00:04.061 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest_2 00:00:04.062 [Pipeline] { 00:00:04.071 [Pipeline] catchError 00:00:04.073 [Pipeline] { 00:00:04.084 [Pipeline] wrap 00:00:04.091 [Pipeline] { 00:00:04.098 [Pipeline] stage 00:00:04.100 [Pipeline] { (Prologue) 00:00:04.317 [Pipeline] sh 00:00:04.601 + logger -p user.info -t JENKINS-CI 00:00:04.619 [Pipeline] echo 00:00:04.621 Node: WFP8 00:00:04.632 [Pipeline] sh 00:00:04.932 [Pipeline] setCustomBuildProperty 00:00:04.944 [Pipeline] echo 00:00:04.945 Cleanup processes 00:00:04.951 [Pipeline] sh 00:00:05.236 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:00:05.236 2833503 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:00:05.246 [Pipeline] sh 00:00:05.525 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:00:05.525 ++ grep -v 'sudo pgrep' 00:00:05.525 ++ awk '{print $1}' 00:00:05.525 + sudo kill -9 00:00:05.525 + true 00:00:05.543 [Pipeline] cleanWs 00:00:05.554 [WS-CLEANUP] Deleting project workspace... 00:00:05.554 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.560 [WS-CLEANUP] done 00:00:05.564 [Pipeline] setCustomBuildProperty 00:00:05.577 [Pipeline] sh 00:00:05.853 + sudo git config --global --replace-all safe.directory '*' 00:00:05.944 [Pipeline] httpRequest 00:00:06.281 [Pipeline] echo 00:00:06.282 Sorcerer 10.211.164.20 is alive 00:00:06.290 [Pipeline] retry 00:00:06.293 [Pipeline] { 00:00:06.304 [Pipeline] httpRequest 00:00:06.308 HttpMethod: GET 00:00:06.312 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.328 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.328 Response Code: HTTP/1.1 200 OK 00:00:06.328 Success: Status code 200 is in the accepted range: 200,404 00:00:06.329 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.264 [Pipeline] } 00:00:07.279 [Pipeline] // retry 00:00:07.287 [Pipeline] sh 00:00:07.605 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.617 [Pipeline] httpRequest 00:00:08.278 [Pipeline] echo 00:00:08.280 Sorcerer 10.211.164.20 is alive 00:00:08.290 [Pipeline] retry 00:00:08.292 [Pipeline] { 00:00:08.304 [Pipeline] httpRequest 00:00:08.307 HttpMethod: GET 00:00:08.308 URL: http://10.211.164.20/packages/spdk_4dfeb7f956ca2ea417b1882cf0e8ac23c1da93fd.tar.gz 00:00:08.308 Sending request to url: http://10.211.164.20/packages/spdk_4dfeb7f956ca2ea417b1882cf0e8ac23c1da93fd.tar.gz 00:00:08.331 Response Code: HTTP/1.1 200 OK 00:00:08.331 Success: Status code 200 is in the accepted range: 200,404 00:00:08.331 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk_4dfeb7f956ca2ea417b1882cf0e8ac23c1da93fd.tar.gz 00:01:15.741 [Pipeline] } 00:01:15.759 [Pipeline] // retry 00:01:15.766 [Pipeline] sh 00:01:16.052 + tar --no-same-owner -xf spdk_4dfeb7f956ca2ea417b1882cf0e8ac23c1da93fd.tar.gz 00:01:18.599 [Pipeline] sh 00:01:18.882 + git -C spdk log --oneline -n5 00:01:18.882 4dfeb7f95 mk/spdk.common.mk Use pattern substitution instead of prefix removal 00:01:18.882 2104eacf0 test/check_so_deps: use VERSION to look for prior tags 00:01:18.882 66289a6db build: use VERSION file for storing version 00:01:18.882 626389917 nvme/rdma: Don't limit max_sge if UMR is used 00:01:18.882 cec5ba284 nvme/rdma: Register UMR per IO request 00:01:18.892 [Pipeline] } 00:01:18.905 [Pipeline] // stage 00:01:18.913 [Pipeline] stage 00:01:18.915 [Pipeline] { (Prepare) 00:01:18.931 [Pipeline] writeFile 00:01:18.945 [Pipeline] sh 00:01:19.229 + logger -p user.info -t JENKINS-CI 00:01:19.242 [Pipeline] sh 00:01:19.527 + logger -p user.info -t JENKINS-CI 00:01:19.538 [Pipeline] sh 00:01:19.823 + cat autorun-spdk.conf 00:01:19.823 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.823 SPDK_TEST_NVMF=1 00:01:19.823 SPDK_TEST_NVME_CLI=1 00:01:19.823 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.823 SPDK_TEST_NVMF_NICS=e810 00:01:19.823 SPDK_TEST_VFIOUSER=1 00:01:19.823 SPDK_RUN_UBSAN=1 00:01:19.823 NET_TYPE=phy 00:01:19.831 RUN_NIGHTLY=0 00:01:19.835 [Pipeline] readFile 00:01:19.859 [Pipeline] withEnv 00:01:19.861 [Pipeline] { 00:01:19.873 [Pipeline] sh 00:01:20.158 + set -ex 00:01:20.158 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf ]] 00:01:20.158 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf 00:01:20.158 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:20.158 ++ SPDK_TEST_NVMF=1 00:01:20.158 ++ SPDK_TEST_NVME_CLI=1 00:01:20.158 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:20.158 ++ 
SPDK_TEST_NVMF_NICS=e810 00:01:20.158 ++ SPDK_TEST_VFIOUSER=1 00:01:20.158 ++ SPDK_RUN_UBSAN=1 00:01:20.158 ++ NET_TYPE=phy 00:01:20.158 ++ RUN_NIGHTLY=0 00:01:20.158 + case $SPDK_TEST_NVMF_NICS in 00:01:20.158 + DRIVERS=ice 00:01:20.158 + [[ tcp == \r\d\m\a ]] 00:01:20.158 + [[ -n ice ]] 00:01:20.158 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:20.158 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:20.158 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:20.158 rmmod: ERROR: Module irdma is not currently loaded 00:01:20.158 rmmod: ERROR: Module i40iw is not currently loaded 00:01:20.158 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:20.158 + true 00:01:20.158 + for D in $DRIVERS 00:01:20.158 + sudo modprobe ice 00:01:20.158 + exit 0 00:01:20.168 [Pipeline] } 00:01:20.182 [Pipeline] // withEnv 00:01:20.188 [Pipeline] } 00:01:20.201 [Pipeline] // stage 00:01:20.210 [Pipeline] catchError 00:01:20.212 [Pipeline] { 00:01:20.225 [Pipeline] timeout 00:01:20.226 Timeout set to expire in 1 hr 0 min 00:01:20.227 [Pipeline] { 00:01:20.241 [Pipeline] stage 00:01:20.242 [Pipeline] { (Tests) 00:01:20.255 [Pipeline] sh 00:01:20.539 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest_2 00:01:20.539 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2 00:01:20.539 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2 00:01:20.539 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest_2 ]] 00:01:20.539 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:01:20.539 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output 00:01:20.539 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk ]] 00:01:20.539 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output ]] 00:01:20.539 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output 00:01:20.539 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output ]] 00:01:20.539 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:20.539 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2 00:01:20.539 + source /etc/os-release 00:01:20.539 ++ NAME='Fedora Linux' 00:01:20.539 ++ VERSION='39 (Cloud Edition)' 00:01:20.539 ++ ID=fedora 00:01:20.539 ++ VERSION_ID=39 00:01:20.539 ++ VERSION_CODENAME= 00:01:20.539 ++ PLATFORM_ID=platform:f39 00:01:20.539 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:20.539 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:20.539 ++ LOGO=fedora-logo-icon 00:01:20.539 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:20.539 ++ HOME_URL=https://fedoraproject.org/ 00:01:20.539 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:20.539 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:20.539 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:20.539 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:20.539 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:20.539 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:20.539 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:20.539 ++ SUPPORT_END=2024-11-12 00:01:20.539 ++ VARIANT='Cloud Edition' 00:01:20.539 ++ VARIANT_ID=cloud 00:01:20.539 + uname -a 00:01:20.539 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:20.539 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh status 00:01:23.076 Hugepages 00:01:23.076 node hugesize free / total 00:01:23.076 node0 1048576kB 0 / 0 00:01:23.076 node0 2048kB 0 / 0 00:01:23.076 node1 1048576kB 0 / 0 00:01:23.076 node1 
2048kB 0 / 0 00:01:23.076 00:01:23.076 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:23.076 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:23.076 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:23.076 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:23.076 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:23.076 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:23.076 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:23.076 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:23.076 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:23.076 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:23.076 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:23.076 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:23.076 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:23.076 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:23.076 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:23.076 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:23.076 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:23.076 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:23.076 + rm -f /tmp/spdk-ld-path 00:01:23.076 + source autorun-spdk.conf 00:01:23.076 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.076 ++ SPDK_TEST_NVMF=1 00:01:23.076 ++ SPDK_TEST_NVME_CLI=1 00:01:23.076 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.076 ++ SPDK_TEST_NVMF_NICS=e810 00:01:23.076 ++ SPDK_TEST_VFIOUSER=1 00:01:23.076 ++ SPDK_RUN_UBSAN=1 00:01:23.076 ++ NET_TYPE=phy 00:01:23.076 ++ RUN_NIGHTLY=0 00:01:23.076 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:23.076 + [[ -n '' ]] 00:01:23.076 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:01:23.076 + for M in /var/spdk/build-*-manifest.txt 00:01:23.076 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:23.076 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output/ 00:01:23.076 + for M in /var/spdk/build-*-manifest.txt 00:01:23.076 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:23.076 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output/ 00:01:23.076 + for M in /var/spdk/build-*-manifest.txt 00:01:23.076 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:23.076 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output/ 00:01:23.076 ++ uname 00:01:23.336 + [[ Linux == \L\i\n\u\x ]] 00:01:23.336 + sudo dmesg -T 00:01:23.336 + sudo dmesg --clear 00:01:23.336 + dmesg_pid=2834946 00:01:23.336 + [[ Fedora Linux == FreeBSD ]] 00:01:23.336 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.336 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:23.336 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:23.336 + [[ -x /usr/src/fio-static/fio ]] 00:01:23.336 + export FIO_BIN=/usr/src/fio-static/fio 00:01:23.336 + FIO_BIN=/usr/src/fio-static/fio 00:01:23.336 + sudo dmesg -Tw 00:01:23.336 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\_\2\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:23.336 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:23.336 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:23.336 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.336 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:23.336 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:23.336 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.336 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:23.336 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf 00:01:23.336 14:42:16 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:23.336 14:42:16 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf 00:01:23.336 14:42:16 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:23.336 14:42:16 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:23.336 14:42:16 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:23.336 14:42:16 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:23.336 14:42:16 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:23.336 14:42:16 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:23.336 14:42:16 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:23.336 14:42:16 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:23.336 14:42:16 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:23.336 14:42:16 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:23.336 14:42:16 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf 00:01:23.336 14:42:16 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:01:23.336 14:42:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:01:23.336 14:42:16 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:23.336 14:42:16 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:23.336 14:42:16 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:23.336 14:42:16 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:23.336 14:42:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.336 14:42:16 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.336 14:42:16 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.336 14:42:16 -- paths/export.sh@5 -- $ export PATH 00:01:23.336 14:42:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:23.336 14:42:16 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output 00:01:23.336 14:42:16 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:23.336 14:42:16 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733924536.XXXXXX 00:01:23.336 14:42:16 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733924536.6FXcF0 00:01:23.336 14:42:16 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:23.336 14:42:16 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:23.336 14:42:16 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/' 00:01:23.336 14:42:16 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/xnvme --exclude /tmp' 00:01:23.336 14:42:16 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/xnvme --exclude /tmp --status-bugs' 00:01:23.336 14:42:16 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:23.336 14:42:16 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:23.336 14:42:16 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.336 14:42:16 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:23.336 14:42:16 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:23.336 14:42:16 -- pm/common@17 -- $ local monitor 00:01:23.336 14:42:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.336 14:42:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.336 14:42:16 -- pm/common@21 -- $ date +%s 00:01:23.336 14:42:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.336 14:42:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:23.336 14:42:16 -- pm/common@21 -- $ date +%s 00:01:23.336 14:42:16 -- pm/common@25 -- $ sleep 1 00:01:23.336 14:42:16 -- pm/common@21 -- $ date +%s 00:01:23.336 14:42:16 -- pm/common@21 -- $ date +%s 00:01:23.336 14:42:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autobuild.sh.1733924536 00:01:23.336 14:42:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autobuild.sh.1733924536 00:01:23.595 14:42:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autobuild.sh.1733924536 00:01:23.595 14:42:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autobuild.sh.1733924536 00:01:23.595 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autobuild.sh.1733924536_collect-cpu-load.pm.log 00:01:23.595 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autobuild.sh.1733924536_collect-vmstat.pm.log 00:01:23.595 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autobuild.sh.1733924536_collect-cpu-temp.pm.log 00:01:23.595 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autobuild.sh.1733924536_collect-bmc-pm.bmc.pm.log 00:01:24.533 14:42:17 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:24.533 14:42:17 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:24.533 14:42:17 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:24.533 14:42:17 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:01:24.533 14:42:17 -- spdk/autobuild.sh@16 -- $ date -u 00:01:24.533 Wed Dec 11 01:42:17 PM UTC 2024 00:01:24.533 14:42:17 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:24.533 v25.01-rc1-1-g4dfeb7f95 00:01:24.533 14:42:17 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:24.533 14:42:17 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:24.533 14:42:17 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:24.533 14:42:17 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:24.533 14:42:17 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:24.533 14:42:17 -- common/autotest_common.sh@10 -- $ set +x 00:01:24.533 ************************************ 00:01:24.533 START TEST ubsan 00:01:24.533 ************************************ 00:01:24.533 14:42:17 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:24.533 using ubsan 00:01:24.533 00:01:24.533 real 0m0.000s 00:01:24.533 user 0m0.000s 00:01:24.533 sys 0m0.000s 00:01:24.533 14:42:17 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:24.533 14:42:17 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:24.533 ************************************ 00:01:24.533 END TEST ubsan 00:01:24.533 ************************************ 00:01:24.533 14:42:17 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:24.533 14:42:17 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:24.533 14:42:17 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:24.533 14:42:17 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:24.533 14:42:17 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:24.533 14:42:17 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:24.533 14:42:17 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:24.533 14:42:17 -- spdk/autobuild.sh@62 -- $ [[ 0 
-eq 1 ]] 00:01:24.533 14:42:17 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:24.792 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk 00:01:24.792 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build 00:01:25.051 Using 'verbs' RDMA provider 00:01:38.198 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/.spdk-isal.log)...done. 00:01:50.409 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/.spdk-isal-crypto.log)...done. 00:01:50.409 Creating mk/config.mk...done. 00:01:50.409 Creating mk/cc.flags.mk...done. 00:01:50.409 Type 'make' to build. 00:01:50.409 14:42:43 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:01:50.409 14:42:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:50.409 14:42:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:50.409 14:42:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.409 ************************************ 00:01:50.409 START TEST make 00:01:50.409 ************************************ 00:01:50.409 14:42:43 make -- common/autotest_common.sh@1129 -- $ make -j96 00:01:51.795 The Meson build system 00:01:51.795 Version: 1.5.0 00:01:51.795 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/libvfio-user 00:01:51.795 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug 00:01:51.795 Build type: native build 00:01:51.795 Project name: libvfio-user 00:01:51.795 Project version: 0.0.1 00:01:51.795 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:51.795 C linker for the host machine: cc ld.bfd 2.40-14 00:01:51.795 Host machine cpu family: x86_64 00:01:51.795 Host machine cpu: x86_64 00:01:51.795 Run-time dependency threads found: YES 00:01:51.795 Library dl found: YES 00:01:51.795 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:51.795 Run-time dependency json-c found: YES 0.17 00:01:51.795 Run-time dependency cmocka found: YES 1.1.7 00:01:51.795 Program pytest-3 found: NO 00:01:51.795 Program flake8 found: NO 00:01:51.795 Program misspell-fixer found: NO 00:01:51.796 Program restructuredtext-lint found: NO 00:01:51.796 Program valgrind found: YES (/usr/bin/valgrind) 00:01:51.796 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:51.796 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:51.796 Compiler for C supports arguments -Wwrite-strings: YES 00:01:51.796 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:51.796 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/libvfio-user/test/test-lspci.sh) 00:01:51.796 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/libvfio-user/test/test-linkage.sh) 00:01:51.796 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:51.796 Build targets in project: 8 00:01:51.796 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:51.796 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:51.796 00:01:51.796 libvfio-user 0.0.1 00:01:51.796 00:01:51.796 User defined options 00:01:51.796 buildtype : debug 00:01:51.796 default_library: shared 00:01:51.796 libdir : /usr/local/lib 00:01:51.796 00:01:51.796 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:52.363 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug' 00:01:52.363 [1/37] Compiling C object samples/null.p/null.c.o 00:01:52.363 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:52.363 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:52.363 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:52.363 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:52.363 [6/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:52.363 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:52.363 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:52.363 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:52.363 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:52.363 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:52.363 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:52.363 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:52.363 [14/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:52.363 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:52.363 [16/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:52.363 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:52.363 [18/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:52.363 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:52.363 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:52.363 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:52.363 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:52.363 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:52.619 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:52.619 [25/37] Compiling C object samples/server.p/server.c.o 00:01:52.619 [26/37] Compiling C object samples/client.p/client.c.o 00:01:52.619 [27/37] Linking target samples/client 00:01:52.619 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:52.619 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:52.619 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:52.619 [31/37] Linking target test/unit_tests 00:01:52.619 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:52.619 [33/37] Linking target samples/server 00:01:52.877 [34/37] Linking target samples/null 00:01:52.877 [35/37] Linking target samples/gpio-pci-idio-16 00:01:52.877 [36/37] Linking target samples/lspci 00:01:52.877 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:52.877 INFO: autodetecting backend as ninja 00:01:52.877 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug 
00:01:52.877 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug 00:01:53.136 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug' 00:01:53.136 ninja: no work to do. 00:01:58.406 The Meson build system 00:01:58.406 Version: 1.5.0 00:01:58.406 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk 00:01:58.406 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build-tmp 00:01:58.406 Build type: native build 00:01:58.406 Program cat found: YES (/usr/bin/cat) 00:01:58.406 Project name: DPDK 00:01:58.406 Project version: 24.03.0 00:01:58.406 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:58.406 C linker for the host machine: cc ld.bfd 2.40-14 00:01:58.406 Host machine cpu family: x86_64 00:01:58.406 Host machine cpu: x86_64 00:01:58.406 Message: ## Building in Developer Mode ## 00:01:58.406 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:58.406 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/buildtools/check-symbols.sh) 00:01:58.406 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:58.406 Program python3 found: YES (/usr/bin/python3) 00:01:58.406 Program cat found: YES (/usr/bin/cat) 00:01:58.407 Compiler for C supports arguments -march=native: YES 00:01:58.407 Checking for size of "void *" : 8 00:01:58.407 Checking for size of "void *" : 8 (cached) 00:01:58.407 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:58.407 Library m found: YES 00:01:58.407 Library numa found: YES 00:01:58.407 Has header "numaif.h" : YES 00:01:58.407 Library fdt found: NO 00:01:58.407 Library execinfo found: NO 00:01:58.407 Has header "execinfo.h" : YES 00:01:58.407 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:58.407 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:58.407 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:58.407 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:58.407 Run-time dependency openssl found: YES 3.1.1 00:01:58.407 Run-time dependency libpcap found: YES 1.10.4 00:01:58.407 Has header "pcap.h" with dependency libpcap: YES 00:01:58.407 Compiler for C supports arguments -Wcast-qual: YES 00:01:58.407 Compiler for C supports arguments -Wdeprecated: YES 00:01:58.407 Compiler for C supports arguments -Wformat: YES 00:01:58.407 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:58.407 Compiler for C supports arguments -Wformat-security: NO 00:01:58.407 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:58.407 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:58.407 Compiler for C supports arguments -Wnested-externs: YES 00:01:58.407 Compiler for C supports arguments -Wold-style-definition: YES 00:01:58.407 Compiler for C supports arguments -Wpointer-arith: YES 00:01:58.407 Compiler for C supports arguments -Wsign-compare: YES 00:01:58.407 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:58.407 Compiler for C supports arguments -Wundef: YES 00:01:58.407 Compiler for C supports arguments -Wwrite-strings: YES 00:01:58.407 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:58.407 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:01:58.407 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:58.407 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:58.407 Program objdump found: YES (/usr/bin/objdump) 00:01:58.407 Compiler for C supports arguments -mavx512f: YES 00:01:58.407 Checking if "AVX512 checking" compiles: YES 00:01:58.407 Fetching value of define "__SSE4_2__" : 1 00:01:58.407 Fetching value of define "__AES__" : 1 00:01:58.407 Fetching value of define "__AVX__" : 1 00:01:58.407 Fetching value of define "__AVX2__" : 1 00:01:58.407 Fetching value of define "__AVX512BW__" : 1 00:01:58.407 Fetching value of define "__AVX512CD__" : 1 00:01:58.407 Fetching value of define "__AVX512DQ__" : 1 00:01:58.407 Fetching value of define "__AVX512F__" : 1 00:01:58.407 Fetching value of define "__AVX512VL__" : 1 00:01:58.407 Fetching value of define "__PCLMUL__" : 1 00:01:58.407 Fetching value of define "__RDRND__" : 1 00:01:58.407 Fetching value of define "__RDSEED__" : 1 00:01:58.407 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:58.407 Fetching value of define "__znver1__" : (undefined) 00:01:58.407 Fetching value of define "__znver2__" : (undefined) 00:01:58.407 Fetching value of define "__znver3__" : (undefined) 00:01:58.407 Fetching value of define "__znver4__" : (undefined) 00:01:58.407 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:58.407 Message: lib/log: Defining dependency "log" 00:01:58.407 Message: lib/kvargs: Defining dependency "kvargs" 00:01:58.407 Message: lib/telemetry: Defining dependency "telemetry" 00:01:58.407 Checking for function "getentropy" : NO 00:01:58.407 Message: lib/eal: Defining dependency "eal" 00:01:58.407 Message: lib/ring: Defining dependency "ring" 00:01:58.407 Message: lib/rcu: Defining dependency "rcu" 00:01:58.407 Message: lib/mempool: Defining dependency "mempool" 00:01:58.407 Message: lib/mbuf: Defining dependency "mbuf" 00:01:58.407 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:58.407 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:58.407 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:58.407 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:58.407 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:58.407 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:58.407 Compiler for C supports arguments -mpclmul: YES 00:01:58.407 Compiler for C supports arguments -maes: YES 00:01:58.407 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:58.407 Compiler for C supports arguments -mavx512bw: YES 00:01:58.407 Compiler for C supports arguments -mavx512dq: YES 00:01:58.407 Compiler for C supports arguments -mavx512vl: YES 00:01:58.407 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:58.407 Compiler for C supports arguments -mavx2: YES 00:01:58.407 Compiler for C supports arguments -mavx: YES 00:01:58.407 Message: lib/net: Defining dependency "net" 00:01:58.407 Message: lib/meter: Defining dependency "meter" 00:01:58.407 Message: lib/ethdev: Defining dependency "ethdev" 00:01:58.407 Message: lib/pci: Defining dependency "pci" 00:01:58.407 Message: lib/cmdline: Defining dependency "cmdline" 00:01:58.407 Message: lib/hash: Defining dependency "hash" 00:01:58.407 Message: lib/timer: Defining dependency "timer" 00:01:58.407 Message: lib/compressdev: Defining dependency "compressdev" 00:01:58.407 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:58.407 Message: lib/dmadev: Defining dependency 
"dmadev" 00:01:58.407 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:58.407 Message: lib/power: Defining dependency "power" 00:01:58.407 Message: lib/reorder: Defining dependency "reorder" 00:01:58.407 Message: lib/security: Defining dependency "security" 00:01:58.407 Has header "linux/userfaultfd.h" : YES 00:01:58.407 Has header "linux/vduse.h" : YES 00:01:58.407 Message: lib/vhost: Defining dependency "vhost" 00:01:58.407 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:58.407 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:58.407 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:58.407 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:58.407 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:58.407 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:58.407 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:58.407 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:58.407 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:58.407 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:58.407 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:58.407 Configuring doxy-api-html.conf using configuration 00:01:58.407 Configuring doxy-api-man.conf using configuration 00:01:58.407 Program mandb found: YES (/usr/bin/mandb) 00:01:58.407 Program sphinx-build found: NO 00:01:58.407 Configuring rte_build_config.h using configuration 00:01:58.407 Message: 00:01:58.407 ================= 00:01:58.407 Applications Enabled 00:01:58.407 ================= 00:01:58.407 00:01:58.407 apps: 00:01:58.407 00:01:58.407 00:01:58.407 Message: 00:01:58.407 ================= 00:01:58.407 Libraries Enabled 00:01:58.407 ================= 00:01:58.407 00:01:58.407 libs: 00:01:58.407 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:58.407 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:58.407 cryptodev, dmadev, power, reorder, security, vhost, 00:01:58.407 00:01:58.407 Message: 00:01:58.407 =============== 00:01:58.407 Drivers Enabled 00:01:58.407 =============== 00:01:58.407 00:01:58.407 common: 00:01:58.407 00:01:58.407 bus: 00:01:58.407 pci, vdev, 00:01:58.407 mempool: 00:01:58.407 ring, 00:01:58.407 dma: 00:01:58.407 00:01:58.407 net: 00:01:58.407 00:01:58.407 crypto: 00:01:58.407 00:01:58.407 compress: 00:01:58.407 00:01:58.407 vdpa: 00:01:58.407 00:01:58.407 00:01:58.407 Message: 00:01:58.407 ================= 00:01:58.407 Content Skipped 00:01:58.407 ================= 00:01:58.407 00:01:58.407 apps: 00:01:58.407 dumpcap: explicitly disabled via build config 00:01:58.407 graph: explicitly disabled via build config 00:01:58.407 pdump: explicitly disabled via build config 00:01:58.407 proc-info: explicitly disabled via build config 00:01:58.407 test-acl: explicitly disabled via build config 00:01:58.407 test-bbdev: explicitly disabled via build config 00:01:58.407 test-cmdline: explicitly disabled via build config 00:01:58.407 test-compress-perf: explicitly disabled via build config 00:01:58.407 test-crypto-perf: explicitly disabled via build config 00:01:58.407 test-dma-perf: explicitly disabled via build config 00:01:58.407 test-eventdev: explicitly disabled via build config 00:01:58.407 test-fib: explicitly disabled via build config 00:01:58.407 test-flow-perf: explicitly disabled via build config 00:01:58.407 test-gpudev: explicitly 
disabled via build config 00:01:58.407 test-mldev: explicitly disabled via build config 00:01:58.407 test-pipeline: explicitly disabled via build config 00:01:58.407 test-pmd: explicitly disabled via build config 00:01:58.407 test-regex: explicitly disabled via build config 00:01:58.407 test-sad: explicitly disabled via build config 00:01:58.407 test-security-perf: explicitly disabled via build config 00:01:58.407 00:01:58.407 libs: 00:01:58.407 argparse: explicitly disabled via build config 00:01:58.407 metrics: explicitly disabled via build config 00:01:58.407 acl: explicitly disabled via build config 00:01:58.407 bbdev: explicitly disabled via build config 00:01:58.407 bitratestats: explicitly disabled via build config 00:01:58.407 bpf: explicitly disabled via build config 00:01:58.407 cfgfile: explicitly disabled via build config 00:01:58.407 distributor: explicitly disabled via build config 00:01:58.407 efd: explicitly disabled via build config 00:01:58.407 eventdev: explicitly disabled via build config 00:01:58.407 dispatcher: explicitly disabled via build config 00:01:58.407 gpudev: explicitly disabled via build config 00:01:58.408 gro: explicitly disabled via build config 00:01:58.408 gso: explicitly disabled via build config 00:01:58.408 ip_frag: explicitly disabled via build config 00:01:58.408 jobstats: explicitly disabled via build config 00:01:58.408 latencystats: explicitly disabled via build config 00:01:58.408 lpm: explicitly disabled via build config 00:01:58.408 member: explicitly disabled via build config 00:01:58.408 pcapng: explicitly disabled via build config 00:01:58.408 rawdev: explicitly disabled via build config 00:01:58.408 regexdev: explicitly disabled via build config 00:01:58.408 mldev: explicitly disabled via build config 00:01:58.408 rib: explicitly disabled via build config 00:01:58.408 sched: explicitly disabled via build config 00:01:58.408 stack: explicitly disabled via build config 00:01:58.408 ipsec: explicitly disabled via build config 00:01:58.408 pdcp: explicitly disabled via build config 00:01:58.408 fib: explicitly disabled via build config 00:01:58.408 port: explicitly disabled via build config 00:01:58.408 pdump: explicitly disabled via build config 00:01:58.408 table: explicitly disabled via build config 00:01:58.408 pipeline: explicitly disabled via build config 00:01:58.408 graph: explicitly disabled via build config 00:01:58.408 node: explicitly disabled via build config 00:01:58.408 00:01:58.408 drivers: 00:01:58.408 common/cpt: not in enabled drivers build config 00:01:58.408 common/dpaax: not in enabled drivers build config 00:01:58.408 common/iavf: not in enabled drivers build config 00:01:58.408 common/idpf: not in enabled drivers build config 00:01:58.408 common/ionic: not in enabled drivers build config 00:01:58.408 common/mvep: not in enabled drivers build config 00:01:58.408 common/octeontx: not in enabled drivers build config 00:01:58.408 bus/auxiliary: not in enabled drivers build config 00:01:58.408 bus/cdx: not in enabled drivers build config 00:01:58.408 bus/dpaa: not in enabled drivers build config 00:01:58.408 bus/fslmc: not in enabled drivers build config 00:01:58.408 bus/ifpga: not in enabled drivers build config 00:01:58.408 bus/platform: not in enabled drivers build config 00:01:58.408 bus/uacce: not in enabled drivers build config 00:01:58.408 bus/vmbus: not in enabled drivers build config 00:01:58.408 common/cnxk: not in enabled drivers build config 00:01:58.408 common/mlx5: not in enabled drivers build config 
00:01:58.408 common/nfp: not in enabled drivers build config 00:01:58.408 common/nitrox: not in enabled drivers build config 00:01:58.408 common/qat: not in enabled drivers build config 00:01:58.408 common/sfc_efx: not in enabled drivers build config 00:01:58.408 mempool/bucket: not in enabled drivers build config 00:01:58.408 mempool/cnxk: not in enabled drivers build config 00:01:58.408 mempool/dpaa: not in enabled drivers build config 00:01:58.408 mempool/dpaa2: not in enabled drivers build config 00:01:58.408 mempool/octeontx: not in enabled drivers build config 00:01:58.408 mempool/stack: not in enabled drivers build config 00:01:58.408 dma/cnxk: not in enabled drivers build config 00:01:58.408 dma/dpaa: not in enabled drivers build config 00:01:58.408 dma/dpaa2: not in enabled drivers build config 00:01:58.408 dma/hisilicon: not in enabled drivers build config 00:01:58.408 dma/idxd: not in enabled drivers build config 00:01:58.408 dma/ioat: not in enabled drivers build config 00:01:58.408 dma/skeleton: not in enabled drivers build config 00:01:58.408 net/af_packet: not in enabled drivers build config 00:01:58.408 net/af_xdp: not in enabled drivers build config 00:01:58.408 net/ark: not in enabled drivers build config 00:01:58.408 net/atlantic: not in enabled drivers build config 00:01:58.408 net/avp: not in enabled drivers build config 00:01:58.408 net/axgbe: not in enabled drivers build config 00:01:58.408 net/bnx2x: not in enabled drivers build config 00:01:58.408 net/bnxt: not in enabled drivers build config 00:01:58.408 net/bonding: not in enabled drivers build config 00:01:58.408 net/cnxk: not in enabled drivers build config 00:01:58.408 net/cpfl: not in enabled drivers build config 00:01:58.408 net/cxgbe: not in enabled drivers build config 00:01:58.408 net/dpaa: not in enabled drivers build config 00:01:58.408 net/dpaa2: not in enabled drivers build config 00:01:58.408 net/e1000: not in enabled drivers build config 00:01:58.408 net/ena: not in enabled drivers build config 00:01:58.408 net/enetc: not in enabled drivers build config 00:01:58.408 net/enetfec: not in enabled drivers build config 00:01:58.408 net/enic: not in enabled drivers build config 00:01:58.408 net/failsafe: not in enabled drivers build config 00:01:58.408 net/fm10k: not in enabled drivers build config 00:01:58.408 net/gve: not in enabled drivers build config 00:01:58.408 net/hinic: not in enabled drivers build config 00:01:58.408 net/hns3: not in enabled drivers build config 00:01:58.408 net/i40e: not in enabled drivers build config 00:01:58.408 net/iavf: not in enabled drivers build config 00:01:58.408 net/ice: not in enabled drivers build config 00:01:58.408 net/idpf: not in enabled drivers build config 00:01:58.408 net/igc: not in enabled drivers build config 00:01:58.408 net/ionic: not in enabled drivers build config 00:01:58.408 net/ipn3ke: not in enabled drivers build config 00:01:58.408 net/ixgbe: not in enabled drivers build config 00:01:58.408 net/mana: not in enabled drivers build config 00:01:58.408 net/memif: not in enabled drivers build config 00:01:58.408 net/mlx4: not in enabled drivers build config 00:01:58.408 net/mlx5: not in enabled drivers build config 00:01:58.408 net/mvneta: not in enabled drivers build config 00:01:58.408 net/mvpp2: not in enabled drivers build config 00:01:58.408 net/netvsc: not in enabled drivers build config 00:01:58.408 net/nfb: not in enabled drivers build config 00:01:58.408 net/nfp: not in enabled drivers build config 00:01:58.408 net/ngbe: not in enabled 
drivers build config 00:01:58.408 net/null: not in enabled drivers build config 00:01:58.408 net/octeontx: not in enabled drivers build config 00:01:58.408 net/octeon_ep: not in enabled drivers build config 00:01:58.408 net/pcap: not in enabled drivers build config 00:01:58.408 net/pfe: not in enabled drivers build config 00:01:58.408 net/qede: not in enabled drivers build config 00:01:58.408 net/ring: not in enabled drivers build config 00:01:58.408 net/sfc: not in enabled drivers build config 00:01:58.408 net/softnic: not in enabled drivers build config 00:01:58.408 net/tap: not in enabled drivers build config 00:01:58.408 net/thunderx: not in enabled drivers build config 00:01:58.408 net/txgbe: not in enabled drivers build config 00:01:58.408 net/vdev_netvsc: not in enabled drivers build config 00:01:58.408 net/vhost: not in enabled drivers build config 00:01:58.408 net/virtio: not in enabled drivers build config 00:01:58.408 net/vmxnet3: not in enabled drivers build config 00:01:58.408 raw/*: missing internal dependency, "rawdev" 00:01:58.408 crypto/armv8: not in enabled drivers build config 00:01:58.408 crypto/bcmfs: not in enabled drivers build config 00:01:58.408 crypto/caam_jr: not in enabled drivers build config 00:01:58.408 crypto/ccp: not in enabled drivers build config 00:01:58.408 crypto/cnxk: not in enabled drivers build config 00:01:58.408 crypto/dpaa_sec: not in enabled drivers build config 00:01:58.408 crypto/dpaa2_sec: not in enabled drivers build config 00:01:58.408 crypto/ipsec_mb: not in enabled drivers build config 00:01:58.408 crypto/mlx5: not in enabled drivers build config 00:01:58.408 crypto/mvsam: not in enabled drivers build config 00:01:58.408 crypto/nitrox: not in enabled drivers build config 00:01:58.408 crypto/null: not in enabled drivers build config 00:01:58.408 crypto/octeontx: not in enabled drivers build config 00:01:58.408 crypto/openssl: not in enabled drivers build config 00:01:58.408 crypto/scheduler: not in enabled drivers build config 00:01:58.408 crypto/uadk: not in enabled drivers build config 00:01:58.408 crypto/virtio: not in enabled drivers build config 00:01:58.408 compress/isal: not in enabled drivers build config 00:01:58.408 compress/mlx5: not in enabled drivers build config 00:01:58.408 compress/nitrox: not in enabled drivers build config 00:01:58.408 compress/octeontx: not in enabled drivers build config 00:01:58.408 compress/zlib: not in enabled drivers build config 00:01:58.408 regex/*: missing internal dependency, "regexdev" 00:01:58.408 ml/*: missing internal dependency, "mldev" 00:01:58.408 vdpa/ifc: not in enabled drivers build config 00:01:58.408 vdpa/mlx5: not in enabled drivers build config 00:01:58.408 vdpa/nfp: not in enabled drivers build config 00:01:58.408 vdpa/sfc: not in enabled drivers build config 00:01:58.408 event/*: missing internal dependency, "eventdev" 00:01:58.408 baseband/*: missing internal dependency, "bbdev" 00:01:58.408 gpu/*: missing internal dependency, "gpudev" 00:01:58.408 00:01:58.408 00:01:58.976 Build targets in project: 85 00:01:58.976 00:01:58.976 DPDK 24.03.0 00:01:58.976 00:01:58.976 User defined options 00:01:58.976 buildtype : debug 00:01:58.976 default_library : shared 00:01:58.976 libdir : lib 00:01:58.976 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build 00:01:58.976 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:58.976 c_link_args : 00:01:58.976 cpu_instruction_set: native 00:01:58.976 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:58.976 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:58.976 enable_docs : false 00:01:58.976 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:58.976 enable_kmods : false 00:01:58.976 max_lcores : 128 00:01:58.976 tests : false 00:01:58.976 00:01:58.976 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:59.245 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build-tmp' 00:01:59.245 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:59.245 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:59.510 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:59.510 [4/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:59.511 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:59.511 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:59.511 [7/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:59.511 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:59.511 [9/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:59.511 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:59.511 [11/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:59.511 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:59.511 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:59.511 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:59.511 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:59.511 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:59.511 [17/268] Linking static target lib/librte_kvargs.a 00:01:59.511 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:59.511 [19/268] Linking static target lib/librte_log.a 00:01:59.511 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:59.511 [21/268] Linking static target lib/librte_pci.a 00:01:59.511 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:59.770 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:59.770 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:59.770 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:59.770 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:59.770 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:59.770 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:59.770 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:59.770 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:59.771 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:59.771 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:59.771 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:59.771 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:59.771 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:59.771 [36/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:59.771 [37/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:59.771 [38/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:59.771 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:59.771 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:59.771 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:00.029 [42/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:00.029 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:00.029 [44/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:00.029 [45/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:00.029 [46/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:00.029 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:00.029 [48/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:00.029 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:00.029 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:00.029 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:00.029 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:00.029 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:00.029 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:00.029 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:00.029 [56/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:00.029 [57/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:00.029 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:00.029 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:00.029 [60/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:00.029 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:00.029 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:00.029 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:00.029 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:00.029 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:00.029 [66/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:00.029 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:00.029 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:00.029 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:00.029 [70/268] Compiling C 
object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:00.029 [71/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:00.029 [72/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:00.029 [73/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:00.029 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:00.029 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:00.029 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:00.029 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:00.029 [78/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:00.029 [79/268] Linking static target lib/librte_telemetry.a 00:02:00.029 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:00.029 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:00.029 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:00.029 [83/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:00.029 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:00.029 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:00.029 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:00.029 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:00.029 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:00.029 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:00.029 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:00.029 [91/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.029 [92/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:00.029 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:00.029 [94/268] Linking static target lib/librte_meter.a 00:02:00.029 [95/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:00.029 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:00.029 [97/268] Linking static target lib/librte_ring.a 00:02:00.029 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:00.029 [99/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:00.029 [100/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:00.029 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:00.029 [102/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.029 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:00.029 [104/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:00.029 [105/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:00.029 [106/268] Linking static target lib/librte_net.a 00:02:00.029 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:00.029 [108/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:00.029 [109/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:00.029 [110/268] Linking static target lib/librte_rcu.a 00:02:00.029 [111/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:00.029 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:00.029 [113/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:00.029 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:00.029 [115/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:00.029 [116/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:00.029 [117/268] Linking static target lib/librte_mempool.a 00:02:00.029 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:00.029 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:00.029 [120/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:00.029 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:00.029 [122/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:00.029 [123/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:00.029 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:00.029 [125/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:00.289 [126/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:00.289 [127/268] Linking static target lib/librte_eal.a 00:02:00.289 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:00.289 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:00.289 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:00.289 [131/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.289 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:00.289 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:00.289 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.289 [135/268] Linking target lib/librte_log.so.24.1 00:02:00.289 [136/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:00.289 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:00.289 [138/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:00.289 [139/268] Linking static target lib/librte_cmdline.a 00:02:00.289 [140/268] Linking static target lib/librte_mbuf.a 00:02:00.289 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:00.289 [142/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:00.289 [143/268] Linking static target lib/librte_timer.a 00:02:00.289 [144/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.289 [145/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:00.289 [146/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:00.289 [147/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:00.289 [148/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.289 [149/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:00.289 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:00.289 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:00.289 [152/268] Compiling C 
object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:00.289 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:00.289 [154/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:00.289 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:00.289 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:00.289 [157/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.289 [158/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:00.289 [159/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:00.289 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:00.548 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:00.548 [162/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:00.548 [163/268] Linking static target lib/librte_dmadev.a 00:02:00.548 [164/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:00.548 [165/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:00.548 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:00.548 [167/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:00.548 [168/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:00.548 [169/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:00.548 [170/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.548 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:00.548 [172/268] Linking target lib/librte_kvargs.so.24.1 00:02:00.548 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:00.548 [174/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:00.548 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:00.548 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:00.548 [177/268] Linking target lib/librte_telemetry.so.24.1 00:02:00.548 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:00.548 [179/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:00.548 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:00.548 [181/268] Linking static target lib/librte_power.a 00:02:00.548 [182/268] Linking static target lib/librte_compressdev.a 00:02:00.548 [183/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:00.548 [184/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:00.548 [185/268] Linking static target lib/librte_reorder.a 00:02:00.548 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:00.548 [187/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:00.548 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:00.548 [189/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:00.548 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:00.548 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:00.548 [192/268] Linking static target 
lib/librte_security.a 00:02:00.548 [193/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:00.548 [194/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.548 [195/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:00.548 [196/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:00.548 [197/268] Linking static target drivers/librte_bus_vdev.a 00:02:00.548 [198/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:00.807 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:00.807 [200/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.807 [201/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:00.807 [202/268] Linking static target lib/librte_hash.a 00:02:00.807 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:00.807 [204/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:00.807 [205/268] Linking static target lib/librte_cryptodev.a 00:02:00.807 [206/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.807 [207/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:00.807 [208/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:00.807 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:00.807 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:00.807 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:00.807 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:00.807 [213/268] Linking static target drivers/librte_bus_pci.a 00:02:00.807 [214/268] Linking static target drivers/librte_mempool_ring.a 00:02:01.065 [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.065 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.065 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.065 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:01.065 [219/268] Linking static target lib/librte_ethdev.a 00:02:01.065 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.323 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.323 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.323 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:01.323 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.323 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.582 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.582 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.519 [228/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:02.519 [229/268] Linking static target lib/librte_vhost.a 00:02:02.777 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.680 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.951 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.210 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.210 [234/268] Linking target lib/librte_eal.so.24.1 00:02:10.469 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:10.469 [236/268] Linking target lib/librte_ring.so.24.1 00:02:10.469 [237/268] Linking target lib/librte_pci.so.24.1 00:02:10.469 [238/268] Linking target lib/librte_timer.so.24.1 00:02:10.469 [239/268] Linking target lib/librte_meter.so.24.1 00:02:10.469 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:10.469 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:10.469 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:10.469 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:10.469 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:10.469 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:10.469 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:10.469 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:10.469 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:10.469 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:10.728 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:10.728 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:10.728 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:10.728 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:10.986 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:10.986 [255/268] Linking target lib/librte_net.so.24.1 00:02:10.986 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:10.986 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:10.986 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:10.986 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:10.986 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:10.986 [261/268] Linking target lib/librte_hash.so.24.1 00:02:10.986 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:10.986 [263/268] Linking target lib/librte_security.so.24.1 00:02:10.986 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:11.244 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:11.244 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:11.244 [267/268] Linking target lib/librte_power.so.24.1 00:02:11.244 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:11.244 INFO: autodetecting backend as ninja 00:02:11.244 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build-tmp -j 96 00:02:23.464 CC lib/ut_mock/mock.o 00:02:23.464 CC 
lib/log/log.o 00:02:23.464 CC lib/log/log_flags.o 00:02:23.464 CC lib/log/log_deprecated.o 00:02:23.464 CC lib/ut/ut.o 00:02:23.464 LIB libspdk_ut.a 00:02:23.464 LIB libspdk_ut_mock.a 00:02:23.464 LIB libspdk_log.a 00:02:23.464 SO libspdk_ut.so.2.0 00:02:23.464 SO libspdk_ut_mock.so.6.0 00:02:23.464 SO libspdk_log.so.7.1 00:02:23.464 SYMLINK libspdk_ut.so 00:02:23.464 SYMLINK libspdk_ut_mock.so 00:02:23.464 SYMLINK libspdk_log.so 00:02:23.464 CC lib/dma/dma.o 00:02:23.464 CC lib/util/base64.o 00:02:23.464 CC lib/util/bit_array.o 00:02:23.464 CC lib/util/cpuset.o 00:02:23.464 CC lib/ioat/ioat.o 00:02:23.464 CXX lib/trace_parser/trace.o 00:02:23.464 CC lib/util/crc16.o 00:02:23.464 CC lib/util/crc32.o 00:02:23.464 CC lib/util/crc32c.o 00:02:23.464 CC lib/util/crc32_ieee.o 00:02:23.464 CC lib/util/crc64.o 00:02:23.464 CC lib/util/dif.o 00:02:23.464 CC lib/util/fd.o 00:02:23.464 CC lib/util/fd_group.o 00:02:23.464 CC lib/util/file.o 00:02:23.464 CC lib/util/hexlify.o 00:02:23.464 CC lib/util/iov.o 00:02:23.464 CC lib/util/math.o 00:02:23.464 CC lib/util/net.o 00:02:23.464 CC lib/util/pipe.o 00:02:23.464 CC lib/util/strerror_tls.o 00:02:23.464 CC lib/util/string.o 00:02:23.464 CC lib/util/uuid.o 00:02:23.464 CC lib/util/xor.o 00:02:23.464 CC lib/util/zipf.o 00:02:23.464 CC lib/util/md5.o 00:02:23.464 CC lib/vfio_user/host/vfio_user.o 00:02:23.464 CC lib/vfio_user/host/vfio_user_pci.o 00:02:23.464 LIB libspdk_dma.a 00:02:23.464 SO libspdk_dma.so.5.0 00:02:23.464 SYMLINK libspdk_dma.so 00:02:23.464 LIB libspdk_ioat.a 00:02:23.464 SO libspdk_ioat.so.7.0 00:02:23.464 SYMLINK libspdk_ioat.so 00:02:23.464 LIB libspdk_vfio_user.a 00:02:23.464 SO libspdk_vfio_user.so.5.0 00:02:23.464 SYMLINK libspdk_vfio_user.so 00:02:23.464 LIB libspdk_util.a 00:02:23.464 SO libspdk_util.so.10.1 00:02:23.464 SYMLINK libspdk_util.so 00:02:23.464 LIB libspdk_trace_parser.a 00:02:23.464 SO libspdk_trace_parser.so.6.0 00:02:23.464 SYMLINK libspdk_trace_parser.so 00:02:23.464 CC lib/idxd/idxd.o 00:02:23.464 CC lib/vmd/vmd.o 00:02:23.464 CC lib/json/json_parse.o 00:02:23.464 CC lib/conf/conf.o 00:02:23.464 CC lib/idxd/idxd_user.o 00:02:23.464 CC lib/vmd/led.o 00:02:23.464 CC lib/json/json_util.o 00:02:23.464 CC lib/idxd/idxd_kernel.o 00:02:23.464 CC lib/json/json_write.o 00:02:23.464 CC lib/rdma_utils/rdma_utils.o 00:02:23.464 CC lib/env_dpdk/env.o 00:02:23.464 CC lib/env_dpdk/memory.o 00:02:23.464 CC lib/env_dpdk/pci.o 00:02:23.464 CC lib/env_dpdk/init.o 00:02:23.464 CC lib/env_dpdk/threads.o 00:02:23.464 CC lib/env_dpdk/pci_ioat.o 00:02:23.464 CC lib/env_dpdk/pci_virtio.o 00:02:23.464 CC lib/env_dpdk/pci_vmd.o 00:02:23.464 CC lib/env_dpdk/pci_idxd.o 00:02:23.464 CC lib/env_dpdk/pci_event.o 00:02:23.464 CC lib/env_dpdk/sigbus_handler.o 00:02:23.464 CC lib/env_dpdk/pci_dpdk.o 00:02:23.464 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:23.464 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:23.464 LIB libspdk_conf.a 00:02:23.464 SO libspdk_conf.so.6.0 00:02:23.464 LIB libspdk_rdma_utils.a 00:02:23.464 LIB libspdk_json.a 00:02:23.464 SYMLINK libspdk_conf.so 00:02:23.464 SO libspdk_rdma_utils.so.1.0 00:02:23.464 SO libspdk_json.so.6.0 00:02:23.464 SYMLINK libspdk_rdma_utils.so 00:02:23.464 SYMLINK libspdk_json.so 00:02:23.464 LIB libspdk_idxd.a 00:02:23.726 SO libspdk_idxd.so.12.1 00:02:23.726 LIB libspdk_vmd.a 00:02:23.726 SYMLINK libspdk_idxd.so 00:02:23.726 SO libspdk_vmd.so.6.0 00:02:23.726 SYMLINK libspdk_vmd.so 00:02:23.726 CC lib/rdma_provider/common.o 00:02:23.726 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:23.726 CC 
lib/jsonrpc/jsonrpc_server.o 00:02:23.726 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:23.726 CC lib/jsonrpc/jsonrpc_client.o 00:02:23.990 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:23.990 LIB libspdk_rdma_provider.a 00:02:23.990 SO libspdk_rdma_provider.so.7.0 00:02:23.990 LIB libspdk_jsonrpc.a 00:02:24.273 SO libspdk_jsonrpc.so.6.0 00:02:24.273 SYMLINK libspdk_rdma_provider.so 00:02:24.273 LIB libspdk_env_dpdk.a 00:02:24.273 SYMLINK libspdk_jsonrpc.so 00:02:24.273 SO libspdk_env_dpdk.so.15.1 00:02:24.273 SYMLINK libspdk_env_dpdk.so 00:02:24.558 CC lib/rpc/rpc.o 00:02:24.817 LIB libspdk_rpc.a 00:02:24.817 SO libspdk_rpc.so.6.0 00:02:24.817 SYMLINK libspdk_rpc.so 00:02:25.076 CC lib/trace/trace.o 00:02:25.076 CC lib/trace/trace_flags.o 00:02:25.076 CC lib/notify/notify.o 00:02:25.076 CC lib/trace/trace_rpc.o 00:02:25.076 CC lib/notify/notify_rpc.o 00:02:25.076 CC lib/keyring/keyring.o 00:02:25.076 CC lib/keyring/keyring_rpc.o 00:02:25.334 LIB libspdk_notify.a 00:02:25.334 SO libspdk_notify.so.6.0 00:02:25.334 LIB libspdk_keyring.a 00:02:25.334 LIB libspdk_trace.a 00:02:25.334 SYMLINK libspdk_notify.so 00:02:25.334 SO libspdk_keyring.so.2.0 00:02:25.334 SO libspdk_trace.so.11.0 00:02:25.594 SYMLINK libspdk_keyring.so 00:02:25.594 SYMLINK libspdk_trace.so 00:02:25.853 CC lib/thread/thread.o 00:02:25.853 CC lib/thread/iobuf.o 00:02:25.853 CC lib/sock/sock.o 00:02:25.853 CC lib/sock/sock_rpc.o 00:02:26.111 LIB libspdk_sock.a 00:02:26.111 SO libspdk_sock.so.10.0 00:02:26.370 SYMLINK libspdk_sock.so 00:02:26.629 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:26.629 CC lib/nvme/nvme_ctrlr.o 00:02:26.629 CC lib/nvme/nvme_fabric.o 00:02:26.629 CC lib/nvme/nvme_ns_cmd.o 00:02:26.629 CC lib/nvme/nvme_ns.o 00:02:26.629 CC lib/nvme/nvme_pcie_common.o 00:02:26.629 CC lib/nvme/nvme_pcie.o 00:02:26.629 CC lib/nvme/nvme_qpair.o 00:02:26.629 CC lib/nvme/nvme.o 00:02:26.629 CC lib/nvme/nvme_quirks.o 00:02:26.629 CC lib/nvme/nvme_transport.o 00:02:26.629 CC lib/nvme/nvme_discovery.o 00:02:26.629 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:26.629 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:26.629 CC lib/nvme/nvme_tcp.o 00:02:26.629 CC lib/nvme/nvme_opal.o 00:02:26.629 CC lib/nvme/nvme_io_msg.o 00:02:26.629 CC lib/nvme/nvme_poll_group.o 00:02:26.629 CC lib/nvme/nvme_zns.o 00:02:26.629 CC lib/nvme/nvme_stubs.o 00:02:26.629 CC lib/nvme/nvme_auth.o 00:02:26.629 CC lib/nvme/nvme_cuse.o 00:02:26.629 CC lib/nvme/nvme_vfio_user.o 00:02:26.629 CC lib/nvme/nvme_rdma.o 00:02:26.887 LIB libspdk_thread.a 00:02:26.887 SO libspdk_thread.so.11.0 00:02:27.145 SYMLINK libspdk_thread.so 00:02:27.404 CC lib/fsdev/fsdev.o 00:02:27.404 CC lib/fsdev/fsdev_io.o 00:02:27.404 CC lib/fsdev/fsdev_rpc.o 00:02:27.404 CC lib/virtio/virtio.o 00:02:27.404 CC lib/vfu_tgt/tgt_endpoint.o 00:02:27.404 CC lib/vfu_tgt/tgt_rpc.o 00:02:27.404 CC lib/virtio/virtio_vhost_user.o 00:02:27.404 CC lib/virtio/virtio_vfio_user.o 00:02:27.404 CC lib/virtio/virtio_pci.o 00:02:27.404 CC lib/init/json_config.o 00:02:27.404 CC lib/init/subsystem_rpc.o 00:02:27.404 CC lib/init/subsystem.o 00:02:27.404 CC lib/init/rpc.o 00:02:27.404 CC lib/blob/blobstore.o 00:02:27.404 CC lib/blob/request.o 00:02:27.404 CC lib/blob/zeroes.o 00:02:27.404 CC lib/blob/blob_bs_dev.o 00:02:27.404 CC lib/accel/accel.o 00:02:27.404 CC lib/accel/accel_rpc.o 00:02:27.404 CC lib/accel/accel_sw.o 00:02:27.662 LIB libspdk_init.a 00:02:27.662 SO libspdk_init.so.6.0 00:02:27.662 LIB libspdk_vfu_tgt.a 00:02:27.662 LIB libspdk_virtio.a 00:02:27.662 SYMLINK libspdk_init.so 00:02:27.662 SO libspdk_vfu_tgt.so.3.0 
00:02:27.662 SO libspdk_virtio.so.7.0 00:02:27.662 SYMLINK libspdk_vfu_tgt.so 00:02:27.662 SYMLINK libspdk_virtio.so 00:02:27.921 LIB libspdk_fsdev.a 00:02:27.921 SO libspdk_fsdev.so.2.0 00:02:27.921 SYMLINK libspdk_fsdev.so 00:02:27.921 CC lib/event/app.o 00:02:27.921 CC lib/event/reactor.o 00:02:27.921 CC lib/event/log_rpc.o 00:02:27.921 CC lib/event/app_rpc.o 00:02:27.921 CC lib/event/scheduler_static.o 00:02:28.180 LIB libspdk_accel.a 00:02:28.180 SO libspdk_accel.so.16.0 00:02:28.180 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:28.180 SYMLINK libspdk_accel.so 00:02:28.439 LIB libspdk_nvme.a 00:02:28.439 LIB libspdk_event.a 00:02:28.439 SO libspdk_event.so.14.0 00:02:28.439 SO libspdk_nvme.so.15.0 00:02:28.439 SYMLINK libspdk_event.so 00:02:28.698 SYMLINK libspdk_nvme.so 00:02:28.698 CC lib/bdev/bdev.o 00:02:28.698 CC lib/bdev/bdev_rpc.o 00:02:28.698 CC lib/bdev/bdev_zone.o 00:02:28.698 CC lib/bdev/part.o 00:02:28.698 CC lib/bdev/scsi_nvme.o 00:02:28.698 LIB libspdk_fuse_dispatcher.a 00:02:28.698 SO libspdk_fuse_dispatcher.so.1.0 00:02:28.956 SYMLINK libspdk_fuse_dispatcher.so 00:02:29.524 LIB libspdk_blob.a 00:02:29.524 SO libspdk_blob.so.12.0 00:02:29.524 SYMLINK libspdk_blob.so 00:02:30.092 CC lib/blobfs/blobfs.o 00:02:30.092 CC lib/lvol/lvol.o 00:02:30.092 CC lib/blobfs/tree.o 00:02:30.661 LIB libspdk_bdev.a 00:02:30.661 LIB libspdk_blobfs.a 00:02:30.661 SO libspdk_bdev.so.17.0 00:02:30.661 SO libspdk_blobfs.so.11.0 00:02:30.661 LIB libspdk_lvol.a 00:02:30.661 SYMLINK libspdk_bdev.so 00:02:30.661 SYMLINK libspdk_blobfs.so 00:02:30.661 SO libspdk_lvol.so.11.0 00:02:30.661 SYMLINK libspdk_lvol.so 00:02:30.921 CC lib/ublk/ublk.o 00:02:30.921 CC lib/ublk/ublk_rpc.o 00:02:30.921 CC lib/nvmf/ctrlr.o 00:02:30.921 CC lib/nbd/nbd.o 00:02:30.921 CC lib/nvmf/ctrlr_discovery.o 00:02:30.921 CC lib/nbd/nbd_rpc.o 00:02:30.921 CC lib/nvmf/ctrlr_bdev.o 00:02:30.921 CC lib/scsi/dev.o 00:02:30.921 CC lib/scsi/lun.o 00:02:30.921 CC lib/nvmf/subsystem.o 00:02:30.921 CC lib/nvmf/nvmf.o 00:02:30.921 CC lib/scsi/port.o 00:02:30.921 CC lib/nvmf/nvmf_rpc.o 00:02:30.921 CC lib/scsi/scsi.o 00:02:30.921 CC lib/nvmf/transport.o 00:02:30.921 CC lib/ftl/ftl_core.o 00:02:30.921 CC lib/scsi/scsi_bdev.o 00:02:30.921 CC lib/nvmf/tcp.o 00:02:30.921 CC lib/nvmf/stubs.o 00:02:30.921 CC lib/scsi/scsi_pr.o 00:02:30.921 CC lib/ftl/ftl_init.o 00:02:30.921 CC lib/scsi/scsi_rpc.o 00:02:30.921 CC lib/ftl/ftl_layout.o 00:02:30.921 CC lib/nvmf/mdns_server.o 00:02:30.921 CC lib/scsi/task.o 00:02:30.921 CC lib/nvmf/vfio_user.o 00:02:30.921 CC lib/ftl/ftl_debug.o 00:02:30.921 CC lib/nvmf/rdma.o 00:02:30.921 CC lib/ftl/ftl_io.o 00:02:30.921 CC lib/ftl/ftl_sb.o 00:02:30.921 CC lib/nvmf/auth.o 00:02:30.921 CC lib/ftl/ftl_l2p.o 00:02:30.921 CC lib/ftl/ftl_l2p_flat.o 00:02:30.921 CC lib/ftl/ftl_band.o 00:02:30.921 CC lib/ftl/ftl_nv_cache.o 00:02:30.921 CC lib/ftl/ftl_band_ops.o 00:02:30.921 CC lib/ftl/ftl_writer.o 00:02:30.921 CC lib/ftl/ftl_rq.o 00:02:30.921 CC lib/ftl/ftl_reloc.o 00:02:30.921 CC lib/ftl/ftl_l2p_cache.o 00:02:30.921 CC lib/ftl/ftl_p2l.o 00:02:31.179 CC lib/ftl/ftl_p2l_log.o 00:02:31.179 CC lib/ftl/mngt/ftl_mngt.o 00:02:31.179 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:31.179 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:31.179 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:31.179 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:31.179 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:31.179 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:31.179 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:31.179 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:31.179 CC 
lib/ftl/mngt/ftl_mngt_self_test.o 00:02:31.179 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:31.179 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:31.179 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:31.179 CC lib/ftl/utils/ftl_conf.o 00:02:31.179 CC lib/ftl/utils/ftl_md.o 00:02:31.179 CC lib/ftl/utils/ftl_bitmap.o 00:02:31.179 CC lib/ftl/utils/ftl_mempool.o 00:02:31.179 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:31.179 CC lib/ftl/utils/ftl_property.o 00:02:31.179 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:31.179 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:31.179 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:31.179 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:31.179 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:31.179 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:31.179 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:31.179 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:31.179 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:31.179 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:31.179 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:31.179 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:31.179 CC lib/ftl/base/ftl_base_dev.o 00:02:31.179 CC lib/ftl/base/ftl_base_bdev.o 00:02:31.179 CC lib/ftl/ftl_trace.o 00:02:31.437 LIB libspdk_nbd.a 00:02:31.437 SO libspdk_nbd.so.7.0 00:02:31.695 SYMLINK libspdk_nbd.so 00:02:31.695 LIB libspdk_scsi.a 00:02:31.695 SO libspdk_scsi.so.9.0 00:02:31.695 LIB libspdk_ublk.a 00:02:31.695 SYMLINK libspdk_scsi.so 00:02:31.695 SO libspdk_ublk.so.3.0 00:02:31.953 SYMLINK libspdk_ublk.so 00:02:32.211 LIB libspdk_ftl.a 00:02:32.211 CC lib/iscsi/conn.o 00:02:32.211 CC lib/iscsi/init_grp.o 00:02:32.211 CC lib/iscsi/iscsi.o 00:02:32.211 CC lib/iscsi/param.o 00:02:32.211 CC lib/vhost/vhost.o 00:02:32.211 CC lib/iscsi/tgt_node.o 00:02:32.211 CC lib/iscsi/portal_grp.o 00:02:32.211 CC lib/vhost/vhost_rpc.o 00:02:32.211 CC lib/vhost/vhost_scsi.o 00:02:32.211 CC lib/iscsi/iscsi_subsystem.o 00:02:32.211 CC lib/vhost/vhost_blk.o 00:02:32.211 CC lib/iscsi/iscsi_rpc.o 00:02:32.211 CC lib/vhost/rte_vhost_user.o 00:02:32.211 CC lib/iscsi/task.o 00:02:32.211 SO libspdk_ftl.so.9.0 00:02:32.469 SYMLINK libspdk_ftl.so 00:02:32.728 LIB libspdk_nvmf.a 00:02:32.728 SO libspdk_nvmf.so.20.0 00:02:32.987 LIB libspdk_vhost.a 00:02:32.987 SO libspdk_vhost.so.8.0 00:02:32.987 SYMLINK libspdk_nvmf.so 00:02:32.987 SYMLINK libspdk_vhost.so 00:02:33.247 LIB libspdk_iscsi.a 00:02:33.247 SO libspdk_iscsi.so.8.0 00:02:33.247 SYMLINK libspdk_iscsi.so 00:02:33.815 CC module/env_dpdk/env_dpdk_rpc.o 00:02:33.815 CC module/vfu_device/vfu_virtio.o 00:02:33.815 CC module/vfu_device/vfu_virtio_blk.o 00:02:33.815 CC module/vfu_device/vfu_virtio_scsi.o 00:02:33.815 CC module/vfu_device/vfu_virtio_rpc.o 00:02:33.815 CC module/vfu_device/vfu_virtio_fs.o 00:02:34.074 LIB libspdk_env_dpdk_rpc.a 00:02:34.074 CC module/fsdev/aio/fsdev_aio.o 00:02:34.074 CC module/keyring/file/keyring.o 00:02:34.074 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:34.074 CC module/fsdev/aio/linux_aio_mgr.o 00:02:34.074 CC module/keyring/file/keyring_rpc.o 00:02:34.074 CC module/accel/ioat/accel_ioat.o 00:02:34.074 CC module/accel/ioat/accel_ioat_rpc.o 00:02:34.074 CC module/scheduler/gscheduler/gscheduler.o 00:02:34.074 CC module/keyring/linux/keyring.o 00:02:34.074 CC module/accel/iaa/accel_iaa.o 00:02:34.074 CC module/keyring/linux/keyring_rpc.o 00:02:34.074 CC module/sock/posix/posix.o 00:02:34.074 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:34.074 CC module/accel/iaa/accel_iaa_rpc.o 00:02:34.074 CC module/blob/bdev/blob_bdev.o 00:02:34.074 CC module/scheduler/dynamic/scheduler_dynamic.o 
00:02:34.074 CC module/accel/error/accel_error.o 00:02:34.074 CC module/accel/error/accel_error_rpc.o 00:02:34.074 SO libspdk_env_dpdk_rpc.so.6.0 00:02:34.074 CC module/accel/dsa/accel_dsa.o 00:02:34.074 CC module/accel/dsa/accel_dsa_rpc.o 00:02:34.074 SYMLINK libspdk_env_dpdk_rpc.so 00:02:34.074 LIB libspdk_keyring_linux.a 00:02:34.074 LIB libspdk_scheduler_gscheduler.a 00:02:34.074 LIB libspdk_keyring_file.a 00:02:34.333 SO libspdk_keyring_linux.so.1.0 00:02:34.333 LIB libspdk_scheduler_dpdk_governor.a 00:02:34.333 SO libspdk_scheduler_gscheduler.so.4.0 00:02:34.333 SO libspdk_keyring_file.so.2.0 00:02:34.333 LIB libspdk_accel_ioat.a 00:02:34.333 LIB libspdk_scheduler_dynamic.a 00:02:34.333 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:34.333 SYMLINK libspdk_keyring_linux.so 00:02:34.333 LIB libspdk_accel_iaa.a 00:02:34.333 LIB libspdk_accel_error.a 00:02:34.333 SO libspdk_accel_ioat.so.6.0 00:02:34.333 SYMLINK libspdk_scheduler_gscheduler.so 00:02:34.333 SO libspdk_scheduler_dynamic.so.4.0 00:02:34.333 SYMLINK libspdk_keyring_file.so 00:02:34.333 SO libspdk_accel_error.so.2.0 00:02:34.333 SO libspdk_accel_iaa.so.3.0 00:02:34.333 LIB libspdk_blob_bdev.a 00:02:34.333 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:34.333 LIB libspdk_accel_dsa.a 00:02:34.333 SYMLINK libspdk_accel_ioat.so 00:02:34.333 SYMLINK libspdk_scheduler_dynamic.so 00:02:34.333 SO libspdk_blob_bdev.so.12.0 00:02:34.333 SYMLINK libspdk_accel_iaa.so 00:02:34.333 SO libspdk_accel_dsa.so.5.0 00:02:34.333 SYMLINK libspdk_accel_error.so 00:02:34.333 LIB libspdk_vfu_device.a 00:02:34.333 SYMLINK libspdk_blob_bdev.so 00:02:34.333 SYMLINK libspdk_accel_dsa.so 00:02:34.333 SO libspdk_vfu_device.so.3.0 00:02:34.592 SYMLINK libspdk_vfu_device.so 00:02:34.592 LIB libspdk_fsdev_aio.a 00:02:34.592 SO libspdk_fsdev_aio.so.1.0 00:02:34.592 LIB libspdk_sock_posix.a 00:02:34.592 SO libspdk_sock_posix.so.6.0 00:02:34.592 SYMLINK libspdk_fsdev_aio.so 00:02:34.850 SYMLINK libspdk_sock_posix.so 00:02:34.850 CC module/blobfs/bdev/blobfs_bdev.o 00:02:34.850 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:34.850 CC module/bdev/delay/vbdev_delay.o 00:02:34.850 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:34.850 CC module/bdev/aio/bdev_aio.o 00:02:34.850 CC module/bdev/raid/bdev_raid.o 00:02:34.850 CC module/bdev/aio/bdev_aio_rpc.o 00:02:34.850 CC module/bdev/raid/bdev_raid_sb.o 00:02:34.850 CC module/bdev/raid/bdev_raid_rpc.o 00:02:34.850 CC module/bdev/lvol/vbdev_lvol.o 00:02:34.850 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:34.850 CC module/bdev/error/vbdev_error.o 00:02:34.850 CC module/bdev/raid/raid0.o 00:02:34.850 CC module/bdev/error/vbdev_error_rpc.o 00:02:34.850 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:34.850 CC module/bdev/raid/raid1.o 00:02:34.850 CC module/bdev/nvme/bdev_nvme.o 00:02:34.850 CC module/bdev/nvme/nvme_rpc.o 00:02:34.850 CC module/bdev/raid/concat.o 00:02:34.850 CC module/bdev/nvme/bdev_mdns_client.o 00:02:34.850 CC module/bdev/nvme/vbdev_opal.o 00:02:34.850 CC module/bdev/null/bdev_null_rpc.o 00:02:34.850 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:34.850 CC module/bdev/null/bdev_null.o 00:02:34.850 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:34.850 CC module/bdev/gpt/gpt.o 00:02:34.850 CC module/bdev/gpt/vbdev_gpt.o 00:02:34.850 CC module/bdev/malloc/bdev_malloc.o 00:02:34.850 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:34.850 CC module/bdev/iscsi/bdev_iscsi.o 00:02:34.850 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:34.850 CC module/bdev/split/vbdev_split.o 00:02:34.850 CC module/bdev/split/vbdev_split_rpc.o 
00:02:34.850 CC module/bdev/passthru/vbdev_passthru.o 00:02:34.850 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:34.850 CC module/bdev/ftl/bdev_ftl.o 00:02:34.850 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:34.850 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:34.850 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:34.850 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:34.850 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:34.850 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:35.109 LIB libspdk_blobfs_bdev.a 00:02:35.109 SO libspdk_blobfs_bdev.so.6.0 00:02:35.109 LIB libspdk_bdev_split.a 00:02:35.109 LIB libspdk_bdev_null.a 00:02:35.109 LIB libspdk_bdev_error.a 00:02:35.109 SO libspdk_bdev_split.so.6.0 00:02:35.367 LIB libspdk_bdev_gpt.a 00:02:35.367 SYMLINK libspdk_blobfs_bdev.so 00:02:35.367 LIB libspdk_bdev_ftl.a 00:02:35.367 SO libspdk_bdev_gpt.so.6.0 00:02:35.367 SO libspdk_bdev_error.so.6.0 00:02:35.367 SO libspdk_bdev_null.so.6.0 00:02:35.367 SYMLINK libspdk_bdev_split.so 00:02:35.367 LIB libspdk_bdev_passthru.a 00:02:35.367 SO libspdk_bdev_ftl.so.6.0 00:02:35.367 LIB libspdk_bdev_delay.a 00:02:35.367 LIB libspdk_bdev_aio.a 00:02:35.367 LIB libspdk_bdev_iscsi.a 00:02:35.368 SO libspdk_bdev_passthru.so.6.0 00:02:35.368 LIB libspdk_bdev_malloc.a 00:02:35.368 LIB libspdk_bdev_zone_block.a 00:02:35.368 SYMLINK libspdk_bdev_gpt.so 00:02:35.368 SYMLINK libspdk_bdev_null.so 00:02:35.368 SYMLINK libspdk_bdev_error.so 00:02:35.368 SO libspdk_bdev_aio.so.6.0 00:02:35.368 SO libspdk_bdev_delay.so.6.0 00:02:35.368 SO libspdk_bdev_iscsi.so.6.0 00:02:35.368 SO libspdk_bdev_zone_block.so.6.0 00:02:35.368 SO libspdk_bdev_malloc.so.6.0 00:02:35.368 SYMLINK libspdk_bdev_ftl.so 00:02:35.368 SYMLINK libspdk_bdev_passthru.so 00:02:35.368 SYMLINK libspdk_bdev_delay.so 00:02:35.368 SYMLINK libspdk_bdev_aio.so 00:02:35.368 SYMLINK libspdk_bdev_iscsi.so 00:02:35.368 SYMLINK libspdk_bdev_zone_block.so 00:02:35.368 SYMLINK libspdk_bdev_malloc.so 00:02:35.368 LIB libspdk_bdev_lvol.a 00:02:35.368 LIB libspdk_bdev_virtio.a 00:02:35.627 SO libspdk_bdev_lvol.so.6.0 00:02:35.627 SO libspdk_bdev_virtio.so.6.0 00:02:35.627 SYMLINK libspdk_bdev_lvol.so 00:02:35.627 SYMLINK libspdk_bdev_virtio.so 00:02:35.886 LIB libspdk_bdev_raid.a 00:02:35.886 SO libspdk_bdev_raid.so.6.0 00:02:35.886 SYMLINK libspdk_bdev_raid.so 00:02:36.823 LIB libspdk_bdev_nvme.a 00:02:36.823 SO libspdk_bdev_nvme.so.7.1 00:02:37.083 SYMLINK libspdk_bdev_nvme.so 00:02:37.651 CC module/event/subsystems/sock/sock.o 00:02:37.651 CC module/event/subsystems/vmd/vmd.o 00:02:37.651 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:37.651 CC module/event/subsystems/iobuf/iobuf.o 00:02:37.651 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:37.651 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:37.651 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:37.651 CC module/event/subsystems/keyring/keyring.o 00:02:37.651 CC module/event/subsystems/scheduler/scheduler.o 00:02:37.651 CC module/event/subsystems/fsdev/fsdev.o 00:02:37.910 LIB libspdk_event_keyring.a 00:02:37.910 LIB libspdk_event_scheduler.a 00:02:37.910 LIB libspdk_event_vhost_blk.a 00:02:37.910 LIB libspdk_event_sock.a 00:02:37.910 LIB libspdk_event_iobuf.a 00:02:37.910 LIB libspdk_event_vmd.a 00:02:37.910 LIB libspdk_event_vfu_tgt.a 00:02:37.910 LIB libspdk_event_fsdev.a 00:02:37.910 SO libspdk_event_keyring.so.1.0 00:02:37.910 SO libspdk_event_sock.so.5.0 00:02:37.910 SO libspdk_event_scheduler.so.4.0 00:02:37.910 SO libspdk_event_vhost_blk.so.3.0 00:02:37.910 SO 
libspdk_event_iobuf.so.3.0 00:02:37.910 SO libspdk_event_vmd.so.6.0 00:02:37.910 SO libspdk_event_vfu_tgt.so.3.0 00:02:37.910 SO libspdk_event_fsdev.so.1.0 00:02:37.910 SYMLINK libspdk_event_sock.so 00:02:37.910 SYMLINK libspdk_event_keyring.so 00:02:37.910 SYMLINK libspdk_event_scheduler.so 00:02:37.910 SYMLINK libspdk_event_vhost_blk.so 00:02:37.910 SYMLINK libspdk_event_iobuf.so 00:02:37.910 SYMLINK libspdk_event_fsdev.so 00:02:37.910 SYMLINK libspdk_event_vfu_tgt.so 00:02:37.910 SYMLINK libspdk_event_vmd.so 00:02:38.169 CC module/event/subsystems/accel/accel.o 00:02:38.428 LIB libspdk_event_accel.a 00:02:38.428 SO libspdk_event_accel.so.6.0 00:02:38.428 SYMLINK libspdk_event_accel.so 00:02:38.997 CC module/event/subsystems/bdev/bdev.o 00:02:38.997 LIB libspdk_event_bdev.a 00:02:38.997 SO libspdk_event_bdev.so.6.0 00:02:38.997 SYMLINK libspdk_event_bdev.so 00:02:39.565 CC module/event/subsystems/scsi/scsi.o 00:02:39.565 CC module/event/subsystems/nbd/nbd.o 00:02:39.565 CC module/event/subsystems/ublk/ublk.o 00:02:39.565 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:39.565 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:39.565 LIB libspdk_event_nbd.a 00:02:39.565 LIB libspdk_event_scsi.a 00:02:39.565 LIB libspdk_event_ublk.a 00:02:39.565 SO libspdk_event_ublk.so.3.0 00:02:39.565 SO libspdk_event_nbd.so.6.0 00:02:39.565 SO libspdk_event_scsi.so.6.0 00:02:39.565 LIB libspdk_event_nvmf.a 00:02:39.565 SYMLINK libspdk_event_ublk.so 00:02:39.565 SYMLINK libspdk_event_scsi.so 00:02:39.565 SYMLINK libspdk_event_nbd.so 00:02:39.824 SO libspdk_event_nvmf.so.6.0 00:02:39.824 SYMLINK libspdk_event_nvmf.so 00:02:40.083 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:40.083 CC module/event/subsystems/iscsi/iscsi.o 00:02:40.083 LIB libspdk_event_vhost_scsi.a 00:02:40.083 LIB libspdk_event_iscsi.a 00:02:40.083 SO libspdk_event_vhost_scsi.so.3.0 00:02:40.342 SO libspdk_event_iscsi.so.6.0 00:02:40.342 SYMLINK libspdk_event_vhost_scsi.so 00:02:40.342 SYMLINK libspdk_event_iscsi.so 00:02:40.342 SO libspdk.so.6.0 00:02:40.342 SYMLINK libspdk.so 00:02:40.924 CC app/trace_record/trace_record.o 00:02:40.924 CC test/rpc_client/rpc_client_test.o 00:02:40.924 CXX app/trace/trace.o 00:02:40.924 TEST_HEADER include/spdk/accel.h 00:02:40.924 TEST_HEADER include/spdk/accel_module.h 00:02:40.924 TEST_HEADER include/spdk/assert.h 00:02:40.924 TEST_HEADER include/spdk/barrier.h 00:02:40.924 TEST_HEADER include/spdk/base64.h 00:02:40.924 CC app/spdk_lspci/spdk_lspci.o 00:02:40.924 TEST_HEADER include/spdk/bdev.h 00:02:40.924 TEST_HEADER include/spdk/bdev_module.h 00:02:40.924 TEST_HEADER include/spdk/bdev_zone.h 00:02:40.924 TEST_HEADER include/spdk/bit_array.h 00:02:40.924 TEST_HEADER include/spdk/bit_pool.h 00:02:40.924 CC app/spdk_nvme_perf/perf.o 00:02:40.924 TEST_HEADER include/spdk/blob_bdev.h 00:02:40.924 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:40.924 TEST_HEADER include/spdk/blobfs.h 00:02:40.924 CC app/spdk_top/spdk_top.o 00:02:40.924 TEST_HEADER include/spdk/blob.h 00:02:40.924 TEST_HEADER include/spdk/conf.h 00:02:40.924 CC app/spdk_nvme_discover/discovery_aer.o 00:02:40.924 TEST_HEADER include/spdk/config.h 00:02:40.924 TEST_HEADER include/spdk/cpuset.h 00:02:40.924 TEST_HEADER include/spdk/crc16.h 00:02:40.924 CC app/spdk_nvme_identify/identify.o 00:02:40.924 TEST_HEADER include/spdk/crc32.h 00:02:40.924 TEST_HEADER include/spdk/crc64.h 00:02:40.924 TEST_HEADER include/spdk/dif.h 00:02:40.924 TEST_HEADER include/spdk/dma.h 00:02:40.924 TEST_HEADER include/spdk/endian.h 00:02:40.924 
TEST_HEADER include/spdk/env_dpdk.h 00:02:40.924 TEST_HEADER include/spdk/env.h 00:02:40.924 TEST_HEADER include/spdk/event.h 00:02:40.924 TEST_HEADER include/spdk/fd_group.h 00:02:40.924 TEST_HEADER include/spdk/fd.h 00:02:40.924 TEST_HEADER include/spdk/file.h 00:02:40.924 TEST_HEADER include/spdk/fsdev.h 00:02:40.924 TEST_HEADER include/spdk/fsdev_module.h 00:02:40.924 TEST_HEADER include/spdk/gpt_spec.h 00:02:40.924 TEST_HEADER include/spdk/ftl.h 00:02:40.924 TEST_HEADER include/spdk/hexlify.h 00:02:40.924 TEST_HEADER include/spdk/idxd.h 00:02:40.924 TEST_HEADER include/spdk/histogram_data.h 00:02:40.924 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:40.924 TEST_HEADER include/spdk/idxd_spec.h 00:02:40.924 TEST_HEADER include/spdk/init.h 00:02:40.924 TEST_HEADER include/spdk/ioat.h 00:02:40.924 CC app/spdk_dd/spdk_dd.o 00:02:40.924 TEST_HEADER include/spdk/ioat_spec.h 00:02:40.924 TEST_HEADER include/spdk/iscsi_spec.h 00:02:40.924 TEST_HEADER include/spdk/json.h 00:02:40.924 TEST_HEADER include/spdk/keyring.h 00:02:40.924 TEST_HEADER include/spdk/keyring_module.h 00:02:40.924 TEST_HEADER include/spdk/jsonrpc.h 00:02:40.924 TEST_HEADER include/spdk/likely.h 00:02:40.924 TEST_HEADER include/spdk/lvol.h 00:02:40.924 TEST_HEADER include/spdk/log.h 00:02:40.924 TEST_HEADER include/spdk/md5.h 00:02:40.924 TEST_HEADER include/spdk/nbd.h 00:02:40.924 TEST_HEADER include/spdk/mmio.h 00:02:40.924 TEST_HEADER include/spdk/memory.h 00:02:40.924 CC app/iscsi_tgt/iscsi_tgt.o 00:02:40.924 TEST_HEADER include/spdk/net.h 00:02:40.924 TEST_HEADER include/spdk/nvme.h 00:02:40.924 CC app/nvmf_tgt/nvmf_main.o 00:02:40.924 TEST_HEADER include/spdk/notify.h 00:02:40.924 TEST_HEADER include/spdk/nvme_intel.h 00:02:40.924 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:40.924 TEST_HEADER include/spdk/nvme_spec.h 00:02:40.924 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:40.924 TEST_HEADER include/spdk/nvme_zns.h 00:02:40.924 TEST_HEADER include/spdk/nvmf.h 00:02:40.924 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:40.924 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:40.924 TEST_HEADER include/spdk/nvmf_spec.h 00:02:40.924 TEST_HEADER include/spdk/opal.h 00:02:40.924 TEST_HEADER include/spdk/nvmf_transport.h 00:02:40.924 TEST_HEADER include/spdk/opal_spec.h 00:02:40.924 TEST_HEADER include/spdk/pci_ids.h 00:02:40.924 TEST_HEADER include/spdk/queue.h 00:02:40.924 TEST_HEADER include/spdk/pipe.h 00:02:40.924 TEST_HEADER include/spdk/rpc.h 00:02:40.924 TEST_HEADER include/spdk/reduce.h 00:02:40.924 TEST_HEADER include/spdk/scheduler.h 00:02:40.924 TEST_HEADER include/spdk/scsi.h 00:02:40.924 TEST_HEADER include/spdk/scsi_spec.h 00:02:40.924 TEST_HEADER include/spdk/stdinc.h 00:02:40.924 TEST_HEADER include/spdk/string.h 00:02:40.924 TEST_HEADER include/spdk/sock.h 00:02:40.924 TEST_HEADER include/spdk/trace.h 00:02:40.924 TEST_HEADER include/spdk/thread.h 00:02:40.924 TEST_HEADER include/spdk/trace_parser.h 00:02:40.924 TEST_HEADER include/spdk/tree.h 00:02:40.924 TEST_HEADER include/spdk/ublk.h 00:02:40.924 TEST_HEADER include/spdk/version.h 00:02:40.924 CC app/spdk_tgt/spdk_tgt.o 00:02:40.924 TEST_HEADER include/spdk/util.h 00:02:40.924 TEST_HEADER include/spdk/uuid.h 00:02:40.924 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:40.924 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:40.924 TEST_HEADER include/spdk/vmd.h 00:02:40.924 TEST_HEADER include/spdk/vhost.h 00:02:40.924 TEST_HEADER include/spdk/xor.h 00:02:40.924 TEST_HEADER include/spdk/zipf.h 00:02:40.924 CXX test/cpp_headers/accel.o 00:02:40.924 CXX 
test/cpp_headers/assert.o 00:02:40.924 CXX test/cpp_headers/accel_module.o 00:02:40.924 CXX test/cpp_headers/base64.o 00:02:40.924 CXX test/cpp_headers/bdev_module.o 00:02:40.924 CXX test/cpp_headers/barrier.o 00:02:40.924 CXX test/cpp_headers/bdev.o 00:02:40.924 CXX test/cpp_headers/bdev_zone.o 00:02:40.924 CXX test/cpp_headers/bit_pool.o 00:02:40.924 CXX test/cpp_headers/bit_array.o 00:02:40.924 CXX test/cpp_headers/blob_bdev.o 00:02:40.924 CXX test/cpp_headers/blobfs_bdev.o 00:02:40.924 CXX test/cpp_headers/blob.o 00:02:40.924 CXX test/cpp_headers/conf.o 00:02:40.924 CXX test/cpp_headers/blobfs.o 00:02:40.924 CXX test/cpp_headers/config.o 00:02:40.924 CXX test/cpp_headers/crc16.o 00:02:40.924 CXX test/cpp_headers/cpuset.o 00:02:40.924 CXX test/cpp_headers/crc64.o 00:02:40.924 CXX test/cpp_headers/crc32.o 00:02:40.924 CXX test/cpp_headers/dif.o 00:02:40.924 CXX test/cpp_headers/endian.o 00:02:40.924 CXX test/cpp_headers/dma.o 00:02:40.924 CXX test/cpp_headers/env_dpdk.o 00:02:40.924 CXX test/cpp_headers/env.o 00:02:40.924 CXX test/cpp_headers/event.o 00:02:40.924 CXX test/cpp_headers/fd_group.o 00:02:40.924 CXX test/cpp_headers/fd.o 00:02:40.924 CXX test/cpp_headers/file.o 00:02:40.924 CXX test/cpp_headers/fsdev.o 00:02:40.924 CXX test/cpp_headers/fsdev_module.o 00:02:40.924 CXX test/cpp_headers/ftl.o 00:02:40.924 CXX test/cpp_headers/gpt_spec.o 00:02:40.924 CXX test/cpp_headers/idxd.o 00:02:40.924 CXX test/cpp_headers/hexlify.o 00:02:40.924 CXX test/cpp_headers/idxd_spec.o 00:02:40.924 CXX test/cpp_headers/histogram_data.o 00:02:40.924 CXX test/cpp_headers/init.o 00:02:40.924 CXX test/cpp_headers/ioat_spec.o 00:02:40.924 CXX test/cpp_headers/ioat.o 00:02:40.924 CXX test/cpp_headers/jsonrpc.o 00:02:40.924 CXX test/cpp_headers/json.o 00:02:40.924 CXX test/cpp_headers/iscsi_spec.o 00:02:40.924 CXX test/cpp_headers/keyring.o 00:02:40.924 CXX test/cpp_headers/keyring_module.o 00:02:40.924 CXX test/cpp_headers/likely.o 00:02:40.924 CXX test/cpp_headers/log.o 00:02:40.924 CXX test/cpp_headers/lvol.o 00:02:40.924 CXX test/cpp_headers/md5.o 00:02:40.924 CXX test/cpp_headers/memory.o 00:02:40.924 CXX test/cpp_headers/mmio.o 00:02:40.924 CXX test/cpp_headers/nbd.o 00:02:40.924 CXX test/cpp_headers/net.o 00:02:40.924 CXX test/cpp_headers/nvme.o 00:02:40.924 CXX test/cpp_headers/notify.o 00:02:40.924 CXX test/cpp_headers/nvme_intel.o 00:02:40.924 CXX test/cpp_headers/nvme_ocssd.o 00:02:40.924 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:40.924 CXX test/cpp_headers/nvme_spec.o 00:02:40.924 CXX test/cpp_headers/nvme_zns.o 00:02:40.924 CXX test/cpp_headers/nvmf_cmd.o 00:02:40.924 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:40.924 CXX test/cpp_headers/nvmf.o 00:02:40.924 CXX test/cpp_headers/nvmf_spec.o 00:02:40.924 CXX test/cpp_headers/nvmf_transport.o 00:02:40.924 CXX test/cpp_headers/opal.o 00:02:40.924 CXX test/cpp_headers/opal_spec.o 00:02:41.202 CC test/thread/poller_perf/poller_perf.o 00:02:41.202 CXX test/cpp_headers/pci_ids.o 00:02:41.202 CC examples/ioat/verify/verify.o 00:02:41.202 CC test/env/pci/pci_ut.o 00:02:41.202 CC examples/util/zipf/zipf.o 00:02:41.202 CC app/fio/nvme/fio_plugin.o 00:02:41.202 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:41.202 CC test/env/vtophys/vtophys.o 00:02:41.202 CC test/env/memory/memory_ut.o 00:02:41.202 CC test/app/jsoncat/jsoncat.o 00:02:41.202 CC app/fio/bdev/fio_plugin.o 00:02:41.202 CC examples/ioat/perf/perf.o 00:02:41.202 CC test/dma/test_dma/test_dma.o 00:02:41.202 CC test/app/stub/stub.o 00:02:41.202 CC 
test/app/histogram_perf/histogram_perf.o 00:02:41.202 CC test/app/bdev_svc/bdev_svc.o 00:02:41.202 LINK rpc_client_test 00:02:41.202 LINK spdk_lspci 00:02:41.470 LINK spdk_trace_record 00:02:41.470 LINK nvmf_tgt 00:02:41.470 CC test/env/mem_callbacks/mem_callbacks.o 00:02:41.470 LINK interrupt_tgt 00:02:41.470 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:41.470 LINK spdk_nvme_discover 00:02:41.470 LINK poller_perf 00:02:41.470 CXX test/cpp_headers/pipe.o 00:02:41.470 CXX test/cpp_headers/reduce.o 00:02:41.470 CXX test/cpp_headers/queue.o 00:02:41.470 CXX test/cpp_headers/rpc.o 00:02:41.470 LINK jsoncat 00:02:41.470 CXX test/cpp_headers/scheduler.o 00:02:41.470 LINK env_dpdk_post_init 00:02:41.730 CXX test/cpp_headers/scsi_spec.o 00:02:41.730 CXX test/cpp_headers/scsi.o 00:02:41.730 CXX test/cpp_headers/sock.o 00:02:41.730 LINK iscsi_tgt 00:02:41.730 CXX test/cpp_headers/stdinc.o 00:02:41.730 CXX test/cpp_headers/string.o 00:02:41.730 CXX test/cpp_headers/thread.o 00:02:41.730 CXX test/cpp_headers/trace.o 00:02:41.730 CXX test/cpp_headers/trace_parser.o 00:02:41.730 CXX test/cpp_headers/tree.o 00:02:41.730 CXX test/cpp_headers/ublk.o 00:02:41.730 CXX test/cpp_headers/util.o 00:02:41.730 CXX test/cpp_headers/uuid.o 00:02:41.730 CXX test/cpp_headers/version.o 00:02:41.730 CXX test/cpp_headers/vfio_user_spec.o 00:02:41.730 CXX test/cpp_headers/vfio_user_pci.o 00:02:41.730 CXX test/cpp_headers/vhost.o 00:02:41.730 CXX test/cpp_headers/xor.o 00:02:41.730 CXX test/cpp_headers/vmd.o 00:02:41.730 CXX test/cpp_headers/zipf.o 00:02:41.730 LINK verify 00:02:41.730 LINK spdk_dd 00:02:41.730 LINK vtophys 00:02:41.730 LINK spdk_tgt 00:02:41.730 LINK zipf 00:02:41.730 LINK ioat_perf 00:02:41.730 LINK histogram_perf 00:02:41.730 LINK bdev_svc 00:02:41.730 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:41.730 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:41.730 LINK stub 00:02:41.730 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:41.988 LINK spdk_trace 00:02:41.988 LINK pci_ut 00:02:41.988 LINK spdk_nvme 00:02:41.988 LINK spdk_bdev 00:02:41.988 LINK test_dma 00:02:41.988 CC test/event/reactor_perf/reactor_perf.o 00:02:41.988 CC test/event/reactor/reactor.o 00:02:41.988 CC test/event/event_perf/event_perf.o 00:02:41.988 CC test/event/app_repeat/app_repeat.o 00:02:42.248 CC test/event/scheduler/scheduler.o 00:02:42.248 CC examples/idxd/perf/perf.o 00:02:42.249 CC examples/vmd/lsvmd/lsvmd.o 00:02:42.249 CC examples/vmd/led/led.o 00:02:42.249 LINK spdk_nvme_perf 00:02:42.249 CC examples/sock/hello_world/hello_sock.o 00:02:42.249 LINK nvme_fuzz 00:02:42.249 LINK vhost_fuzz 00:02:42.249 CC examples/thread/thread/thread_ex.o 00:02:42.249 LINK spdk_nvme_identify 00:02:42.249 LINK reactor_perf 00:02:42.249 LINK mem_callbacks 00:02:42.249 LINK reactor 00:02:42.249 LINK event_perf 00:02:42.249 LINK spdk_top 00:02:42.249 CC app/vhost/vhost.o 00:02:42.249 LINK app_repeat 00:02:42.249 LINK lsvmd 00:02:42.249 LINK led 00:02:42.249 LINK scheduler 00:02:42.506 LINK hello_sock 00:02:42.506 LINK idxd_perf 00:02:42.506 LINK thread 00:02:42.506 LINK vhost 00:02:42.506 CC test/nvme/aer/aer.o 00:02:42.506 CC test/nvme/sgl/sgl.o 00:02:42.506 CC test/nvme/e2edp/nvme_dp.o 00:02:42.506 CC test/nvme/overhead/overhead.o 00:02:42.506 CC test/nvme/connect_stress/connect_stress.o 00:02:42.506 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:42.506 CC test/nvme/startup/startup.o 00:02:42.506 CC test/nvme/err_injection/err_injection.o 00:02:42.506 CC test/nvme/simple_copy/simple_copy.o 00:02:42.506 CC test/nvme/reset/reset.o 
00:02:42.506 CC test/nvme/fdp/fdp.o 00:02:42.506 CC test/nvme/reserve/reserve.o 00:02:42.506 CC test/nvme/cuse/cuse.o 00:02:42.506 CC test/nvme/fused_ordering/fused_ordering.o 00:02:42.506 CC test/nvme/boot_partition/boot_partition.o 00:02:42.506 CC test/nvme/compliance/nvme_compliance.o 00:02:42.506 CC test/blobfs/mkfs/mkfs.o 00:02:42.506 CC test/accel/dif/dif.o 00:02:42.764 LINK memory_ut 00:02:42.764 CC test/lvol/esnap/esnap.o 00:02:42.764 LINK connect_stress 00:02:42.764 LINK err_injection 00:02:42.764 LINK startup 00:02:42.764 LINK doorbell_aers 00:02:42.764 LINK boot_partition 00:02:42.764 LINK reserve 00:02:42.764 LINK fused_ordering 00:02:42.764 LINK simple_copy 00:02:42.764 LINK mkfs 00:02:42.764 LINK reset 00:02:42.764 LINK sgl 00:02:42.764 LINK nvme_dp 00:02:42.764 LINK aer 00:02:42.764 LINK overhead 00:02:42.764 CC examples/nvme/hotplug/hotplug.o 00:02:42.764 CC examples/nvme/abort/abort.o 00:02:42.764 CC examples/nvme/reconnect/reconnect.o 00:02:42.764 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:42.764 CC examples/nvme/arbitration/arbitration.o 00:02:42.764 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:42.764 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:42.764 CC examples/nvme/hello_world/hello_world.o 00:02:42.764 LINK fdp 00:02:43.022 LINK nvme_compliance 00:02:43.022 CC examples/accel/perf/accel_perf.o 00:02:43.022 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:43.022 CC examples/blob/cli/blobcli.o 00:02:43.022 CC examples/blob/hello_world/hello_blob.o 00:02:43.022 LINK pmr_persistence 00:02:43.022 LINK cmb_copy 00:02:43.022 LINK hello_world 00:02:43.022 LINK hotplug 00:02:43.022 LINK iscsi_fuzz 00:02:43.281 LINK abort 00:02:43.281 LINK reconnect 00:02:43.281 LINK arbitration 00:02:43.281 LINK dif 00:02:43.281 LINK hello_blob 00:02:43.281 LINK hello_fsdev 00:02:43.281 LINK nvme_manage 00:02:43.281 LINK accel_perf 00:02:43.540 LINK blobcli 00:02:43.540 LINK cuse 00:02:43.798 CC test/bdev/bdevio/bdevio.o 00:02:43.798 CC examples/bdev/hello_world/hello_bdev.o 00:02:43.798 CC examples/bdev/bdevperf/bdevperf.o 00:02:44.057 LINK bdevio 00:02:44.057 LINK hello_bdev 00:02:44.624 LINK bdevperf 00:02:45.191 CC examples/nvmf/nvmf/nvmf.o 00:02:45.191 LINK nvmf 00:02:46.571 LINK esnap 00:02:46.571 00:02:46.571 real 0m56.457s 00:02:46.571 user 8m2.424s 00:02:46.571 sys 3m43.366s 00:02:46.571 14:43:39 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:46.571 14:43:39 make -- common/autotest_common.sh@10 -- $ set +x 00:02:46.571 ************************************ 00:02:46.571 END TEST make 00:02:46.571 ************************************ 00:02:46.571 14:43:39 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:46.571 14:43:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:46.571 14:43:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:46.571 14:43:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.571 14:43:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/collect-cpu-load.pid ]] 00:02:46.571 14:43:39 -- pm/common@44 -- $ pid=2834990 00:02:46.571 14:43:39 -- pm/common@50 -- $ kill -TERM 2834990 00:02:46.571 14:43:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.571 14:43:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/collect-vmstat.pid ]] 00:02:46.571 14:43:39 -- pm/common@44 -- $ pid=2834992 00:02:46.571 14:43:39 -- pm/common@50 -- $ kill -TERM 2834992 00:02:46.571 14:43:39 -- 
pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.571 14:43:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:46.571 14:43:39 -- pm/common@44 -- $ pid=2834994 00:02:46.571 14:43:39 -- pm/common@50 -- $ kill -TERM 2834994 00:02:46.571 14:43:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.571 14:43:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:46.571 14:43:39 -- pm/common@44 -- $ pid=2835017 00:02:46.571 14:43:39 -- pm/common@50 -- $ sudo -E kill -TERM 2835017 00:02:46.571 14:43:39 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:46.571 14:43:39 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf 00:02:46.831 14:43:39 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:46.831 14:43:39 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:46.831 14:43:39 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:46.831 14:43:39 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:46.831 14:43:39 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:46.831 14:43:39 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:46.831 14:43:39 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:46.831 14:43:39 -- scripts/common.sh@336 -- # IFS=.-: 00:02:46.831 14:43:39 -- scripts/common.sh@336 -- # read -ra ver1 00:02:46.831 14:43:39 -- scripts/common.sh@337 -- # IFS=.-: 00:02:46.831 14:43:39 -- scripts/common.sh@337 -- # read -ra ver2 00:02:46.831 14:43:39 -- scripts/common.sh@338 -- # local 'op=<' 00:02:46.831 14:43:39 -- scripts/common.sh@340 -- # ver1_l=2 00:02:46.831 14:43:39 -- scripts/common.sh@341 -- # ver2_l=1 00:02:46.831 14:43:39 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:46.831 14:43:39 -- scripts/common.sh@344 -- # case "$op" in 00:02:46.831 14:43:39 -- scripts/common.sh@345 -- # : 1 00:02:46.831 14:43:39 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:46.831 14:43:39 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:46.831 14:43:39 -- scripts/common.sh@365 -- # decimal 1 00:02:46.831 14:43:39 -- scripts/common.sh@353 -- # local d=1 00:02:46.831 14:43:39 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:46.831 14:43:39 -- scripts/common.sh@355 -- # echo 1 00:02:46.831 14:43:39 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:46.831 14:43:39 -- scripts/common.sh@366 -- # decimal 2 00:02:46.831 14:43:39 -- scripts/common.sh@353 -- # local d=2 00:02:46.831 14:43:39 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:46.831 14:43:39 -- scripts/common.sh@355 -- # echo 2 00:02:46.831 14:43:39 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:46.831 14:43:39 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:46.831 14:43:39 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:46.831 14:43:39 -- scripts/common.sh@368 -- # return 0 00:02:46.831 14:43:39 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:46.831 14:43:39 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:46.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.831 --rc genhtml_branch_coverage=1 00:02:46.831 --rc genhtml_function_coverage=1 00:02:46.831 --rc genhtml_legend=1 00:02:46.831 --rc geninfo_all_blocks=1 00:02:46.831 --rc geninfo_unexecuted_blocks=1 00:02:46.831 00:02:46.831 ' 00:02:46.831 14:43:39 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:46.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.831 --rc genhtml_branch_coverage=1 00:02:46.831 --rc genhtml_function_coverage=1 00:02:46.831 --rc genhtml_legend=1 00:02:46.831 --rc geninfo_all_blocks=1 00:02:46.831 --rc geninfo_unexecuted_blocks=1 00:02:46.831 00:02:46.831 ' 00:02:46.831 14:43:39 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:46.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.831 --rc genhtml_branch_coverage=1 00:02:46.831 --rc genhtml_function_coverage=1 00:02:46.831 --rc genhtml_legend=1 00:02:46.831 --rc geninfo_all_blocks=1 00:02:46.831 --rc geninfo_unexecuted_blocks=1 00:02:46.831 00:02:46.831 ' 00:02:46.831 14:43:39 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:46.831 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:46.831 --rc genhtml_branch_coverage=1 00:02:46.831 --rc genhtml_function_coverage=1 00:02:46.831 --rc genhtml_legend=1 00:02:46.831 --rc geninfo_all_blocks=1 00:02:46.831 --rc geninfo_unexecuted_blocks=1 00:02:46.831 00:02:46.831 ' 00:02:46.831 14:43:39 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:02:46.831 14:43:39 -- nvmf/common.sh@7 -- # uname -s 00:02:46.831 14:43:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:46.831 14:43:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:46.831 14:43:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:46.831 14:43:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:46.831 14:43:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:46.831 14:43:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:46.831 14:43:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:46.831 14:43:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:46.831 14:43:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:46.831 14:43:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:46.831 14:43:39 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:46.831 14:43:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:46.831 14:43:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:46.831 14:43:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:46.831 14:43:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:46.831 14:43:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:46.831 14:43:39 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:02:46.831 14:43:39 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:46.831 14:43:39 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:46.831 14:43:39 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:46.831 14:43:39 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:46.831 14:43:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.832 14:43:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.832 14:43:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.832 14:43:39 -- paths/export.sh@5 -- # export PATH 00:02:46.832 14:43:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:46.832 14:43:39 -- nvmf/common.sh@51 -- # : 0 00:02:46.832 14:43:39 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:46.832 14:43:39 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:46.832 14:43:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:46.832 14:43:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:46.832 14:43:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:46.832 14:43:39 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:46.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:46.832 14:43:39 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:46.832 14:43:39 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:46.832 14:43:39 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:46.832 14:43:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:46.832 14:43:39 -- spdk/autotest.sh@32 -- # uname -s 00:02:46.832 14:43:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:46.832 14:43:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:46.832 14:43:39 -- spdk/autotest.sh@34 -- # mkdir -p 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/coredumps 00:02:46.832 14:43:39 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/core-collector.sh %P %s %t' 00:02:46.832 14:43:39 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/coredumps 00:02:46.832 14:43:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:46.832 14:43:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:46.832 14:43:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:46.832 14:43:39 -- spdk/autotest.sh@48 -- # udevadm_pid=2898694 00:02:46.832 14:43:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:46.832 14:43:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:46.832 14:43:39 -- pm/common@17 -- # local monitor 00:02:46.832 14:43:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.832 14:43:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.832 14:43:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.832 14:43:39 -- pm/common@21 -- # date +%s 00:02:46.832 14:43:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:46.832 14:43:39 -- pm/common@21 -- # date +%s 00:02:46.832 14:43:39 -- pm/common@25 -- # sleep 1 00:02:46.832 14:43:39 -- pm/common@21 -- # date +%s 00:02:46.832 14:43:39 -- pm/common@21 -- # date +%s 00:02:46.832 14:43:39 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autotest.sh.1733924619 00:02:46.832 14:43:39 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autotest.sh.1733924619 00:02:46.832 14:43:39 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autotest.sh.1733924619 00:02:46.832 14:43:39 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autotest.sh.1733924619 00:02:46.832 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autotest.sh.1733924619_collect-cpu-load.pm.log 00:02:46.832 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autotest.sh.1733924619_collect-vmstat.pm.log 00:02:46.832 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autotest.sh.1733924619_collect-cpu-temp.pm.log 00:02:46.832 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autotest.sh.1733924619_collect-bmc-pm.bmc.pm.log 00:02:47.769 14:43:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:47.769 14:43:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:47.769 14:43:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:47.769 14:43:40 -- common/autotest_common.sh@10 -- # set +x 00:02:47.769 14:43:40 -- spdk/autotest.sh@59 -- # create_test_list 00:02:47.769 14:43:40 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:47.769 14:43:40 -- common/autotest_common.sh@10 -- # set +x 00:02:48.028 14:43:40 -- 
spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/autotest.sh 00:02:48.028 14:43:40 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:02:48.028 14:43:40 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:02:48.028 14:43:40 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output 00:02:48.028 14:43:40 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:02:48.028 14:43:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:48.028 14:43:40 -- common/autotest_common.sh@1457 -- # uname 00:02:48.028 14:43:40 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:48.028 14:43:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:48.028 14:43:40 -- common/autotest_common.sh@1477 -- # uname 00:02:48.028 14:43:40 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:48.028 14:43:40 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:48.028 14:43:40 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:48.028 lcov: LCOV version 1.15 00:02:48.028 14:43:40 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_base.info 00:03:00.234 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:00.234 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/nvme/nvme_stubs.gcno 00:03:15.117 14:44:06 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:15.117 14:44:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:15.117 14:44:06 -- common/autotest_common.sh@10 -- # set +x 00:03:15.117 14:44:06 -- spdk/autotest.sh@78 -- # rm -f 00:03:15.117 14:44:06 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:03:16.055 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:16.055 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:16.055 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:16.056 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:16.056 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:16.056 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:16.315 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:16.315 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:16.315 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:16.315 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:16.315 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:16.315 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:16.315 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:16.315 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:16.315 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:16.315 0000:80:04.1 (8086 2021): 
Already using the ioatdma driver 00:03:16.315 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:16.575 14:44:09 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:16.575 14:44:09 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:16.575 14:44:09 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:16.575 14:44:09 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:16.575 14:44:09 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:16.575 14:44:09 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:16.575 14:44:09 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:16.575 14:44:09 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:03:16.575 14:44:09 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:16.575 14:44:09 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:16.575 14:44:09 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:16.575 14:44:09 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:16.575 14:44:09 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:16.575 14:44:09 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:16.575 14:44:09 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:16.575 14:44:09 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:16.575 14:44:09 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:16.575 14:44:09 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:16.575 14:44:09 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:16.575 No valid GPT data, bailing 00:03:16.575 14:44:09 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:16.575 14:44:09 -- scripts/common.sh@394 -- # pt= 00:03:16.575 14:44:09 -- scripts/common.sh@395 -- # return 1 00:03:16.575 14:44:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:16.575 1+0 records in 00:03:16.575 1+0 records out 00:03:16.575 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00423607 s, 248 MB/s 00:03:16.575 14:44:09 -- spdk/autotest.sh@105 -- # sync 00:03:16.575 14:44:09 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:16.575 14:44:09 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:16.575 14:44:09 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:23.146 14:44:14 -- spdk/autotest.sh@111 -- # uname -s 00:03:23.146 14:44:14 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:23.146 14:44:14 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:23.146 14:44:14 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh status 00:03:25.053 Hugepages 00:03:25.053 node hugesize free / total 00:03:25.053 node0 1048576kB 0 / 0 00:03:25.053 node0 2048kB 0 / 0 00:03:25.053 node1 1048576kB 0 / 0 00:03:25.053 node1 2048kB 0 / 0 00:03:25.053 00:03:25.053 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:25.053 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:25.053 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:25.053 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:25.053 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:25.053 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:25.053 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:25.053 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:25.053 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:25.053 NVMe 
0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:25.053 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:25.053 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:25.053 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:25.053 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:25.053 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:25.053 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:25.053 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:25.053 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:25.053 14:44:17 -- spdk/autotest.sh@117 -- # uname -s 00:03:25.053 14:44:17 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:25.053 14:44:17 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:25.053 14:44:17 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:03:28.418 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:28.418 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:28.418 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:28.418 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:28.418 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:28.418 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:28.418 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:28.418 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:28.418 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:28.418 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:28.418 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:28.418 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:28.418 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:28.418 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:28.418 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:28.418 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:28.988 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:28.988 14:44:21 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:29.925 14:44:22 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:29.925 14:44:22 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:29.925 14:44:22 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:29.925 14:44:22 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:29.925 14:44:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:29.925 14:44:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:29.925 14:44:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:29.925 14:44:22 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh 00:03:29.925 14:44:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:29.925 14:44:22 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:29.925 14:44:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:29.925 14:44:22 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:03:33.213 Waiting for block devices as requested 00:03:33.213 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:33.213 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:33.213 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:33.213 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:33.213 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:33.213 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:33.213 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:33.473 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:33.473 0000:00:04.0 (8086 
2021): vfio-pci -> ioatdma 00:03:33.473 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:33.732 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:33.732 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:33.732 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:33.732 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:33.991 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:33.991 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:33.991 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:34.251 14:44:27 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:34.251 14:44:27 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:34.251 14:44:27 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:34.251 14:44:27 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:03:34.251 14:44:27 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:34.251 14:44:27 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:34.251 14:44:27 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:34.251 14:44:27 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:34.251 14:44:27 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:34.251 14:44:27 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:34.251 14:44:27 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:34.251 14:44:27 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:34.251 14:44:27 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:34.251 14:44:27 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:34.251 14:44:27 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:34.251 14:44:27 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:34.251 14:44:27 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:34.251 14:44:27 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:34.251 14:44:27 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:34.251 14:44:27 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:34.251 14:44:27 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:34.251 14:44:27 -- common/autotest_common.sh@1543 -- # continue 00:03:34.251 14:44:27 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:34.251 14:44:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:34.251 14:44:27 -- common/autotest_common.sh@10 -- # set +x 00:03:34.251 14:44:27 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:34.251 14:44:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:34.251 14:44:27 -- common/autotest_common.sh@10 -- # set +x 00:03:34.251 14:44:27 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:03:37.543 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:37.543 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:37.543 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:37.543 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:37.543 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:37.543 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:37.543 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:37.543 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:37.543 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:37.543 0000:80:04.6 (8086 2021): ioatdma -> 
vfio-pci 00:03:37.543 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:37.543 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:37.543 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:37.543 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:37.543 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:37.543 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:38.111 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:38.111 14:44:31 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:38.111 14:44:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:38.111 14:44:31 -- common/autotest_common.sh@10 -- # set +x 00:03:38.111 14:44:31 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:38.111 14:44:31 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:38.111 14:44:31 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:38.111 14:44:31 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:38.111 14:44:31 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:38.111 14:44:31 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:38.111 14:44:31 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:38.111 14:44:31 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:38.111 14:44:31 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:38.111 14:44:31 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:38.111 14:44:31 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:38.111 14:44:31 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh 00:03:38.111 14:44:31 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:38.370 14:44:31 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:38.370 14:44:31 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:38.370 14:44:31 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:38.370 14:44:31 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:38.370 14:44:31 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:38.370 14:44:31 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:38.370 14:44:31 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:38.370 14:44:31 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:38.370 14:44:31 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:38.370 14:44:31 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:38.370 14:44:31 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2913113 00:03:38.370 14:44:31 -- common/autotest_common.sh@1585 -- # waitforlisten 2913113 00:03:38.370 14:44:31 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:03:38.370 14:44:31 -- common/autotest_common.sh@835 -- # '[' -z 2913113 ']' 00:03:38.370 14:44:31 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:38.370 14:44:31 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:38.370 14:44:31 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:38.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
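The get_nvme_bdfs_by_id pass above builds its BDF list from gen_nvme.sh piped through jq, then keeps only the controllers whose PCI device ID matches 0x0a54 (the 8086 0a54 drive listed in the earlier status table). A minimal standalone sketch of the same enumeration, assuming the workspace layout used in this job, is:

    # Sketch only: enumerate NVMe controllers the way get_nvme_bdfs_by_id does above
    # and keep the BDFs whose PCI device ID is 0x0a54.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && echo "$bdf"
    done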
00:03:38.370 14:44:31 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:38.370 14:44:31 -- common/autotest_common.sh@10 -- # set +x 00:03:38.370 [2024-12-11 14:44:31.281135] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:03:38.370 [2024-12-11 14:44:31.281203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2913113 ] 00:03:38.370 [2024-12-11 14:44:31.357576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.370 [2024-12-11 14:44:31.399453] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:38.629 14:44:31 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:38.629 14:44:31 -- common/autotest_common.sh@868 -- # return 0 00:03:38.629 14:44:31 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:38.629 14:44:31 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:38.629 14:44:31 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:41.919 nvme0n1 00:03:41.919 14:44:34 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:41.919 [2024-12-11 14:44:34.807985] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:41.919 request: 00:03:41.919 { 00:03:41.919 "nvme_ctrlr_name": "nvme0", 00:03:41.919 "password": "test", 00:03:41.919 "method": "bdev_nvme_opal_revert", 00:03:41.919 "req_id": 1 00:03:41.919 } 00:03:41.919 Got JSON-RPC error response 00:03:41.919 response: 00:03:41.919 { 00:03:41.919 "code": -32602, 00:03:41.919 "message": "Invalid parameters" 00:03:41.919 } 00:03:41.919 14:44:34 -- common/autotest_common.sh@1591 -- # true 00:03:41.919 14:44:34 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:41.919 14:44:34 -- common/autotest_common.sh@1595 -- # killprocess 2913113 00:03:41.919 14:44:34 -- common/autotest_common.sh@954 -- # '[' -z 2913113 ']' 00:03:41.919 14:44:34 -- common/autotest_common.sh@958 -- # kill -0 2913113 00:03:41.919 14:44:34 -- common/autotest_common.sh@959 -- # uname 00:03:41.919 14:44:34 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:41.919 14:44:34 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2913113 00:03:41.919 14:44:34 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:41.919 14:44:34 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:41.919 14:44:34 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2913113' 00:03:41.919 killing process with pid 2913113 00:03:41.919 14:44:34 -- common/autotest_common.sh@973 -- # kill 2913113 00:03:41.919 14:44:34 -- common/autotest_common.sh@978 -- # wait 2913113 00:03:43.832 14:44:36 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:43.832 14:44:36 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:43.832 14:44:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:43.832 14:44:36 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:43.832 14:44:36 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:43.832 14:44:36 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:43.832 14:44:36 -- common/autotest_common.sh@10 -- # set +x 00:03:43.832 14:44:36 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 
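The Opal cleanup that just ran reduces to two rpc.py calls against the freshly started spdk_tgt; on this drive the second call returns the -32602 "nvme0 not support opal" error shown above, which opal_revert_cleanup swallows and moves on. A condensed sketch of that sequence, with the controller name, password and BDF taken from this run:

    # Sketch of the opal_revert_cleanup flow traced above (values are from this run, not generic).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
    # Attach the controller at 0000:5e:00.0 as "nvme0" (run as the user that owns /var/tmp/spdk.sock).
    $rpc bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
    # Attempt the revert; on this drive it fails with -32602, which the test treats as non-fatal.
    $rpc bdev_nvme_opal_revert -b nvme0 -p test || true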
00:03:43.832 14:44:36 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/env.sh 00:03:43.832 14:44:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:43.832 14:44:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:43.832 14:44:36 -- common/autotest_common.sh@10 -- # set +x 00:03:43.832 ************************************ 00:03:43.832 START TEST env 00:03:43.832 ************************************ 00:03:43.832 14:44:36 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/env.sh 00:03:43.832 * Looking for test storage... 00:03:43.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env 00:03:43.832 14:44:36 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:43.832 14:44:36 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:43.832 14:44:36 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:43.832 14:44:36 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:43.832 14:44:36 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:43.832 14:44:36 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:43.832 14:44:36 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:43.832 14:44:36 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:43.832 14:44:36 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:43.832 14:44:36 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:43.832 14:44:36 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:43.832 14:44:36 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:43.832 14:44:36 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:43.832 14:44:36 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:43.832 14:44:36 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:43.832 14:44:36 env -- scripts/common.sh@344 -- # case "$op" in 00:03:43.832 14:44:36 env -- scripts/common.sh@345 -- # : 1 00:03:43.832 14:44:36 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:43.832 14:44:36 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:43.832 14:44:36 env -- scripts/common.sh@365 -- # decimal 1 00:03:43.832 14:44:36 env -- scripts/common.sh@353 -- # local d=1 00:03:43.832 14:44:36 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:43.832 14:44:36 env -- scripts/common.sh@355 -- # echo 1 00:03:43.832 14:44:36 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:43.832 14:44:36 env -- scripts/common.sh@366 -- # decimal 2 00:03:43.832 14:44:36 env -- scripts/common.sh@353 -- # local d=2 00:03:43.832 14:44:36 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:43.832 14:44:36 env -- scripts/common.sh@355 -- # echo 2 00:03:43.832 14:44:36 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:43.832 14:44:36 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:43.832 14:44:36 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:43.832 14:44:36 env -- scripts/common.sh@368 -- # return 0 00:03:43.832 14:44:36 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:43.832 14:44:36 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:43.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.832 --rc genhtml_branch_coverage=1 00:03:43.832 --rc genhtml_function_coverage=1 00:03:43.832 --rc genhtml_legend=1 00:03:43.832 --rc geninfo_all_blocks=1 00:03:43.832 --rc geninfo_unexecuted_blocks=1 00:03:43.832 00:03:43.832 ' 00:03:43.832 14:44:36 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:43.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.832 --rc genhtml_branch_coverage=1 00:03:43.832 --rc genhtml_function_coverage=1 00:03:43.832 --rc genhtml_legend=1 00:03:43.832 --rc geninfo_all_blocks=1 00:03:43.832 --rc geninfo_unexecuted_blocks=1 00:03:43.832 00:03:43.832 ' 00:03:43.832 14:44:36 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:43.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.832 --rc genhtml_branch_coverage=1 00:03:43.832 --rc genhtml_function_coverage=1 00:03:43.832 --rc genhtml_legend=1 00:03:43.832 --rc geninfo_all_blocks=1 00:03:43.832 --rc geninfo_unexecuted_blocks=1 00:03:43.832 00:03:43.832 ' 00:03:43.832 14:44:36 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:43.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:43.832 --rc genhtml_branch_coverage=1 00:03:43.832 --rc genhtml_function_coverage=1 00:03:43.832 --rc genhtml_legend=1 00:03:43.833 --rc geninfo_all_blocks=1 00:03:43.833 --rc geninfo_unexecuted_blocks=1 00:03:43.833 00:03:43.833 ' 00:03:43.833 14:44:36 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/memory/memory_ut 00:03:43.833 14:44:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:43.833 14:44:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:43.833 14:44:36 env -- common/autotest_common.sh@10 -- # set +x 00:03:43.833 ************************************ 00:03:43.833 START TEST env_memory 00:03:43.833 ************************************ 00:03:43.833 14:44:36 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/memory/memory_ut 00:03:43.833 00:03:43.833 00:03:43.833 CUnit - A unit testing framework for C - Version 2.1-3 00:03:43.833 http://cunit.sourceforge.net/ 00:03:43.833 00:03:43.833 00:03:43.833 Suite: memory 00:03:43.833 Test: alloc and free memory map ...[2024-12-11 14:44:36.805640] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:43.833 passed 00:03:43.833 Test: mem map translation ...[2024-12-11 14:44:36.823724] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:43.833 [2024-12-11 14:44:36.823739] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:43.833 [2024-12-11 14:44:36.823773] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:43.833 [2024-12-11 14:44:36.823779] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:43.833 passed 00:03:43.833 Test: mem map registration ...[2024-12-11 14:44:36.860434] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:43.833 [2024-12-11 14:44:36.860449] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:43.833 passed 00:03:44.093 Test: mem map adjacent registrations ...passed 00:03:44.093 00:03:44.093 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.093 suites 1 1 n/a 0 0 00:03:44.093 tests 4 4 4 0 0 00:03:44.093 asserts 152 152 152 0 n/a 00:03:44.093 00:03:44.093 Elapsed time = 0.137 seconds 00:03:44.093 00:03:44.093 real 0m0.150s 00:03:44.093 user 0m0.140s 00:03:44.093 sys 0m0.009s 00:03:44.093 14:44:36 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:44.093 14:44:36 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:44.093 ************************************ 00:03:44.093 END TEST env_memory 00:03:44.093 ************************************ 00:03:44.093 14:44:36 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/vtophys/vtophys 00:03:44.093 14:44:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:44.093 14:44:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:44.093 14:44:36 env -- common/autotest_common.sh@10 -- # set +x 00:03:44.093 ************************************ 00:03:44.093 START TEST env_vtophys 00:03:44.093 ************************************ 00:03:44.093 14:44:36 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/vtophys/vtophys 00:03:44.093 EAL: lib.eal log level changed from notice to debug 00:03:44.093 EAL: Detected lcore 0 as core 0 on socket 0 00:03:44.093 EAL: Detected lcore 1 as core 1 on socket 0 00:03:44.093 EAL: Detected lcore 2 as core 2 on socket 0 00:03:44.093 EAL: Detected lcore 3 as core 3 on socket 0 00:03:44.093 EAL: Detected lcore 4 as core 4 on socket 0 00:03:44.093 EAL: Detected lcore 5 as core 5 on socket 0 00:03:44.093 EAL: Detected lcore 6 as core 6 on socket 0 00:03:44.093 EAL: Detected lcore 7 as core 8 on socket 0 00:03:44.093 EAL: Detected lcore 8 as core 9 on socket 0 00:03:44.093 EAL: Detected lcore 9 as core 10 on socket 0 00:03:44.093 EAL: 
Detected lcore 10 as core 11 on socket 0 00:03:44.093 EAL: Detected lcore 11 as core 12 on socket 0 00:03:44.093 EAL: Detected lcore 12 as core 13 on socket 0 00:03:44.093 EAL: Detected lcore 13 as core 16 on socket 0 00:03:44.093 EAL: Detected lcore 14 as core 17 on socket 0 00:03:44.093 EAL: Detected lcore 15 as core 18 on socket 0 00:03:44.093 EAL: Detected lcore 16 as core 19 on socket 0 00:03:44.093 EAL: Detected lcore 17 as core 20 on socket 0 00:03:44.093 EAL: Detected lcore 18 as core 21 on socket 0 00:03:44.093 EAL: Detected lcore 19 as core 25 on socket 0 00:03:44.093 EAL: Detected lcore 20 as core 26 on socket 0 00:03:44.093 EAL: Detected lcore 21 as core 27 on socket 0 00:03:44.093 EAL: Detected lcore 22 as core 28 on socket 0 00:03:44.093 EAL: Detected lcore 23 as core 29 on socket 0 00:03:44.093 EAL: Detected lcore 24 as core 0 on socket 1 00:03:44.093 EAL: Detected lcore 25 as core 1 on socket 1 00:03:44.093 EAL: Detected lcore 26 as core 2 on socket 1 00:03:44.093 EAL: Detected lcore 27 as core 3 on socket 1 00:03:44.093 EAL: Detected lcore 28 as core 4 on socket 1 00:03:44.093 EAL: Detected lcore 29 as core 5 on socket 1 00:03:44.093 EAL: Detected lcore 30 as core 6 on socket 1 00:03:44.093 EAL: Detected lcore 31 as core 9 on socket 1 00:03:44.093 EAL: Detected lcore 32 as core 10 on socket 1 00:03:44.093 EAL: Detected lcore 33 as core 11 on socket 1 00:03:44.093 EAL: Detected lcore 34 as core 12 on socket 1 00:03:44.093 EAL: Detected lcore 35 as core 13 on socket 1 00:03:44.093 EAL: Detected lcore 36 as core 16 on socket 1 00:03:44.093 EAL: Detected lcore 37 as core 17 on socket 1 00:03:44.093 EAL: Detected lcore 38 as core 18 on socket 1 00:03:44.093 EAL: Detected lcore 39 as core 19 on socket 1 00:03:44.093 EAL: Detected lcore 40 as core 20 on socket 1 00:03:44.093 EAL: Detected lcore 41 as core 21 on socket 1 00:03:44.093 EAL: Detected lcore 42 as core 24 on socket 1 00:03:44.093 EAL: Detected lcore 43 as core 25 on socket 1 00:03:44.093 EAL: Detected lcore 44 as core 26 on socket 1 00:03:44.093 EAL: Detected lcore 45 as core 27 on socket 1 00:03:44.093 EAL: Detected lcore 46 as core 28 on socket 1 00:03:44.093 EAL: Detected lcore 47 as core 29 on socket 1 00:03:44.093 EAL: Detected lcore 48 as core 0 on socket 0 00:03:44.093 EAL: Detected lcore 49 as core 1 on socket 0 00:03:44.093 EAL: Detected lcore 50 as core 2 on socket 0 00:03:44.093 EAL: Detected lcore 51 as core 3 on socket 0 00:03:44.093 EAL: Detected lcore 52 as core 4 on socket 0 00:03:44.093 EAL: Detected lcore 53 as core 5 on socket 0 00:03:44.093 EAL: Detected lcore 54 as core 6 on socket 0 00:03:44.093 EAL: Detected lcore 55 as core 8 on socket 0 00:03:44.093 EAL: Detected lcore 56 as core 9 on socket 0 00:03:44.093 EAL: Detected lcore 57 as core 10 on socket 0 00:03:44.093 EAL: Detected lcore 58 as core 11 on socket 0 00:03:44.093 EAL: Detected lcore 59 as core 12 on socket 0 00:03:44.093 EAL: Detected lcore 60 as core 13 on socket 0 00:03:44.093 EAL: Detected lcore 61 as core 16 on socket 0 00:03:44.093 EAL: Detected lcore 62 as core 17 on socket 0 00:03:44.093 EAL: Detected lcore 63 as core 18 on socket 0 00:03:44.093 EAL: Detected lcore 64 as core 19 on socket 0 00:03:44.093 EAL: Detected lcore 65 as core 20 on socket 0 00:03:44.093 EAL: Detected lcore 66 as core 21 on socket 0 00:03:44.093 EAL: Detected lcore 67 as core 25 on socket 0 00:03:44.093 EAL: Detected lcore 68 as core 26 on socket 0 00:03:44.093 EAL: Detected lcore 69 as core 27 on socket 0 00:03:44.093 EAL: Detected lcore 70 as core 28 on 
socket 0 00:03:44.093 EAL: Detected lcore 71 as core 29 on socket 0 00:03:44.093 EAL: Detected lcore 72 as core 0 on socket 1 00:03:44.093 EAL: Detected lcore 73 as core 1 on socket 1 00:03:44.093 EAL: Detected lcore 74 as core 2 on socket 1 00:03:44.093 EAL: Detected lcore 75 as core 3 on socket 1 00:03:44.093 EAL: Detected lcore 76 as core 4 on socket 1 00:03:44.093 EAL: Detected lcore 77 as core 5 on socket 1 00:03:44.093 EAL: Detected lcore 78 as core 6 on socket 1 00:03:44.093 EAL: Detected lcore 79 as core 9 on socket 1 00:03:44.093 EAL: Detected lcore 80 as core 10 on socket 1 00:03:44.093 EAL: Detected lcore 81 as core 11 on socket 1 00:03:44.093 EAL: Detected lcore 82 as core 12 on socket 1 00:03:44.093 EAL: Detected lcore 83 as core 13 on socket 1 00:03:44.093 EAL: Detected lcore 84 as core 16 on socket 1 00:03:44.093 EAL: Detected lcore 85 as core 17 on socket 1 00:03:44.093 EAL: Detected lcore 86 as core 18 on socket 1 00:03:44.093 EAL: Detected lcore 87 as core 19 on socket 1 00:03:44.093 EAL: Detected lcore 88 as core 20 on socket 1 00:03:44.093 EAL: Detected lcore 89 as core 21 on socket 1 00:03:44.093 EAL: Detected lcore 90 as core 24 on socket 1 00:03:44.093 EAL: Detected lcore 91 as core 25 on socket 1 00:03:44.093 EAL: Detected lcore 92 as core 26 on socket 1 00:03:44.093 EAL: Detected lcore 93 as core 27 on socket 1 00:03:44.093 EAL: Detected lcore 94 as core 28 on socket 1 00:03:44.093 EAL: Detected lcore 95 as core 29 on socket 1 00:03:44.093 EAL: Maximum logical cores by configuration: 128 00:03:44.093 EAL: Detected CPU lcores: 96 00:03:44.093 EAL: Detected NUMA nodes: 2 00:03:44.093 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:44.093 EAL: Detected shared linkage of DPDK 00:03:44.093 EAL: No shared files mode enabled, IPC will be disabled 00:03:44.093 EAL: Bus pci wants IOVA as 'DC' 00:03:44.093 EAL: Buses did not request a specific IOVA mode. 00:03:44.093 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:44.093 EAL: Selected IOVA mode 'VA' 00:03:44.093 EAL: Probing VFIO support... 00:03:44.093 EAL: IOMMU type 1 (Type 1) is supported 00:03:44.093 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:44.093 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:44.093 EAL: VFIO support initialized 00:03:44.093 EAL: Ask a virtual area of 0x2e000 bytes 00:03:44.093 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:44.093 EAL: Setting up physically contiguous memory... 
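The memseg lists being reserved here sit on top of the 2 MB hugepages ("page size 0x800kB") that setup.sh allocated earlier; the per-node free/total counts the EAL has to work with can be read straight from sysfs with standard kernel interfaces, independent of this job:

    # Per-NUMA-node hugepage availability - the same data as the "setup.sh status" table earlier in this log.
    for node in /sys/devices/system/node/node*; do
        for sz in "$node"/hugepages/hugepages-*; do
            printf '%s %s free=%s total=%s\n' "${node##*/}" "${sz##*/}" \
                "$(cat "$sz/free_hugepages")" "$(cat "$sz/nr_hugepages")"
        done
    done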
00:03:44.093 EAL: Setting maximum number of open files to 524288 00:03:44.093 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:44.093 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:44.093 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:44.093 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.093 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:44.093 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.093 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.093 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:44.093 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:44.093 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.093 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:44.093 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.093 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.093 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:44.093 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:44.093 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.093 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:44.093 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.094 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.094 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:44.094 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:44.094 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.094 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:44.094 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:44.094 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.094 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:44.094 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:44.094 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:44.094 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.094 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:44.094 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:44.094 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.094 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:44.094 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:44.094 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.094 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:44.094 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:44.094 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.094 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:44.094 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:44.094 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.094 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:44.094 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:44.094 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.094 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:44.094 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:44.094 EAL: Ask a virtual area of 0x61000 bytes 00:03:44.094 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:44.094 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:44.094 EAL: Ask a virtual area of 0x400000000 bytes 00:03:44.094 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:03:44.094 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:44.094 EAL: Hugepages will be freed exactly as allocated. 00:03:44.094 EAL: No shared files mode enabled, IPC is disabled 00:03:44.094 EAL: No shared files mode enabled, IPC is disabled 00:03:44.094 EAL: TSC frequency is ~2300000 KHz 00:03:44.094 EAL: Main lcore 0 is ready (tid=7f3e4e60aa00;cpuset=[0]) 00:03:44.094 EAL: Trying to obtain current memory policy. 00:03:44.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.094 EAL: Restoring previous memory policy: 0 00:03:44.094 EAL: request: mp_malloc_sync 00:03:44.094 EAL: No shared files mode enabled, IPC is disabled 00:03:44.094 EAL: Heap on socket 0 was expanded by 2MB 00:03:44.094 EAL: No shared files mode enabled, IPC is disabled 00:03:44.094 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:44.094 EAL: Mem event callback 'spdk:(nil)' registered 00:03:44.094 00:03:44.094 00:03:44.094 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.094 http://cunit.sourceforge.net/ 00:03:44.094 00:03:44.094 00:03:44.094 Suite: components_suite 00:03:44.094 Test: vtophys_malloc_test ...passed 00:03:44.094 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:44.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.094 EAL: Restoring previous memory policy: 4 00:03:44.094 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.094 EAL: request: mp_malloc_sync 00:03:44.094 EAL: No shared files mode enabled, IPC is disabled 00:03:44.094 EAL: Heap on socket 0 was expanded by 4MB 00:03:44.094 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.094 EAL: request: mp_malloc_sync 00:03:44.094 EAL: No shared files mode enabled, IPC is disabled 00:03:44.094 EAL: Heap on socket 0 was shrunk by 4MB 00:03:44.094 EAL: Trying to obtain current memory policy. 00:03:44.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.094 EAL: Restoring previous memory policy: 4 00:03:44.094 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.094 EAL: request: mp_malloc_sync 00:03:44.094 EAL: No shared files mode enabled, IPC is disabled 00:03:44.094 EAL: Heap on socket 0 was expanded by 6MB 00:03:44.094 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.094 EAL: request: mp_malloc_sync 00:03:44.094 EAL: No shared files mode enabled, IPC is disabled 00:03:44.094 EAL: Heap on socket 0 was shrunk by 6MB 00:03:44.094 EAL: Trying to obtain current memory policy. 00:03:44.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.094 EAL: Restoring previous memory policy: 4 00:03:44.094 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.094 EAL: request: mp_malloc_sync 00:03:44.094 EAL: No shared files mode enabled, IPC is disabled 00:03:44.094 EAL: Heap on socket 0 was expanded by 10MB 00:03:44.094 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.094 EAL: request: mp_malloc_sync 00:03:44.094 EAL: No shared files mode enabled, IPC is disabled 00:03:44.094 EAL: Heap on socket 0 was shrunk by 10MB 00:03:44.094 EAL: Trying to obtain current memory policy. 
00:03:44.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.094 EAL: Restoring previous memory policy: 4 00:03:44.094 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.094 EAL: request: mp_malloc_sync 00:03:44.094 EAL: No shared files mode enabled, IPC is disabled 00:03:44.094 EAL: Heap on socket 0 was expanded by 18MB 00:03:44.094 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.094 EAL: request: mp_malloc_sync 00:03:44.094 EAL: No shared files mode enabled, IPC is disabled 00:03:44.094 EAL: Heap on socket 0 was shrunk by 18MB 00:03:44.094 EAL: Trying to obtain current memory policy. 00:03:44.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.094 EAL: Restoring previous memory policy: 4 00:03:44.094 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.094 EAL: request: mp_malloc_sync 00:03:44.094 EAL: No shared files mode enabled, IPC is disabled 00:03:44.094 EAL: Heap on socket 0 was expanded by 34MB 00:03:44.094 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.094 EAL: request: mp_malloc_sync 00:03:44.094 EAL: No shared files mode enabled, IPC is disabled 00:03:44.094 EAL: Heap on socket 0 was shrunk by 34MB 00:03:44.094 EAL: Trying to obtain current memory policy. 00:03:44.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.094 EAL: Restoring previous memory policy: 4 00:03:44.094 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.094 EAL: request: mp_malloc_sync 00:03:44.094 EAL: No shared files mode enabled, IPC is disabled 00:03:44.094 EAL: Heap on socket 0 was expanded by 66MB 00:03:44.094 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.094 EAL: request: mp_malloc_sync 00:03:44.094 EAL: No shared files mode enabled, IPC is disabled 00:03:44.094 EAL: Heap on socket 0 was shrunk by 66MB 00:03:44.094 EAL: Trying to obtain current memory policy. 00:03:44.094 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.353 EAL: Restoring previous memory policy: 4 00:03:44.353 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.353 EAL: request: mp_malloc_sync 00:03:44.353 EAL: No shared files mode enabled, IPC is disabled 00:03:44.353 EAL: Heap on socket 0 was expanded by 130MB 00:03:44.353 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.353 EAL: request: mp_malloc_sync 00:03:44.353 EAL: No shared files mode enabled, IPC is disabled 00:03:44.353 EAL: Heap on socket 0 was shrunk by 130MB 00:03:44.353 EAL: Trying to obtain current memory policy. 00:03:44.353 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.353 EAL: Restoring previous memory policy: 4 00:03:44.353 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.353 EAL: request: mp_malloc_sync 00:03:44.353 EAL: No shared files mode enabled, IPC is disabled 00:03:44.353 EAL: Heap on socket 0 was expanded by 258MB 00:03:44.353 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.353 EAL: request: mp_malloc_sync 00:03:44.353 EAL: No shared files mode enabled, IPC is disabled 00:03:44.353 EAL: Heap on socket 0 was shrunk by 258MB 00:03:44.353 EAL: Trying to obtain current memory policy. 
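Each pass of vtophys_spdk_malloc_test above steps the allocation size up (4 MB, 6 MB, 10 MB, ... through 1026 MB), and every allocate/free pair shows up as the EAL expanding and then shrinking the socket-0 heap through the 'spdk:(nil)' mem event callback registered earlier. The unit can also be run on its own outside autotest when debugging this path; it typically needs root and the hugepages reserved above:

    # Run the env/vtophys unit directly - the same binary autotest invokes in this section.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
    sudo ./test/env/vtophys/vtophys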
00:03:44.353 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.613 EAL: Restoring previous memory policy: 4 00:03:44.613 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.613 EAL: request: mp_malloc_sync 00:03:44.613 EAL: No shared files mode enabled, IPC is disabled 00:03:44.613 EAL: Heap on socket 0 was expanded by 514MB 00:03:44.613 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.613 EAL: request: mp_malloc_sync 00:03:44.613 EAL: No shared files mode enabled, IPC is disabled 00:03:44.613 EAL: Heap on socket 0 was shrunk by 514MB 00:03:44.613 EAL: Trying to obtain current memory policy. 00:03:44.613 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:44.872 EAL: Restoring previous memory policy: 4 00:03:44.872 EAL: Calling mem event callback 'spdk:(nil)' 00:03:44.872 EAL: request: mp_malloc_sync 00:03:44.872 EAL: No shared files mode enabled, IPC is disabled 00:03:44.872 EAL: Heap on socket 0 was expanded by 1026MB 00:03:44.872 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.131 EAL: request: mp_malloc_sync 00:03:45.131 EAL: No shared files mode enabled, IPC is disabled 00:03:45.131 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:45.131 passed 00:03:45.131 00:03:45.131 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.131 suites 1 1 n/a 0 0 00:03:45.131 tests 2 2 2 0 0 00:03:45.131 asserts 497 497 497 0 n/a 00:03:45.131 00:03:45.131 Elapsed time = 0.967 seconds 00:03:45.131 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.131 EAL: request: mp_malloc_sync 00:03:45.131 EAL: No shared files mode enabled, IPC is disabled 00:03:45.131 EAL: Heap on socket 0 was shrunk by 2MB 00:03:45.131 EAL: No shared files mode enabled, IPC is disabled 00:03:45.131 EAL: No shared files mode enabled, IPC is disabled 00:03:45.131 EAL: No shared files mode enabled, IPC is disabled 00:03:45.131 00:03:45.131 real 0m1.104s 00:03:45.131 user 0m0.641s 00:03:45.131 sys 0m0.433s 00:03:45.131 14:44:38 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.131 14:44:38 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:45.131 ************************************ 00:03:45.131 END TEST env_vtophys 00:03:45.131 ************************************ 00:03:45.131 14:44:38 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/pci/pci_ut 00:03:45.131 14:44:38 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:45.131 14:44:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.131 14:44:38 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.131 ************************************ 00:03:45.131 START TEST env_pci 00:03:45.131 ************************************ 00:03:45.131 14:44:38 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/pci/pci_ut 00:03:45.131 00:03:45.131 00:03:45.131 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.131 http://cunit.sourceforge.net/ 00:03:45.131 00:03:45.131 00:03:45.131 Suite: pci 00:03:45.131 Test: pci_hook ...[2024-12-11 14:44:38.170359] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2914401 has claimed it 00:03:45.390 EAL: Cannot find device (10000:00:01.0) 00:03:45.390 EAL: Failed to attach device on primary process 00:03:45.390 passed 00:03:45.390 00:03:45.390 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:45.390 suites 1 1 n/a 0 0 00:03:45.390 tests 1 1 1 0 0 00:03:45.390 asserts 25 25 25 0 n/a 00:03:45.390 00:03:45.390 Elapsed time = 0.027 seconds 00:03:45.390 00:03:45.390 real 0m0.047s 00:03:45.390 user 0m0.014s 00:03:45.390 sys 0m0.033s 00:03:45.390 14:44:38 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:45.390 14:44:38 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:45.390 ************************************ 00:03:45.390 END TEST env_pci 00:03:45.390 ************************************ 00:03:45.390 14:44:38 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:45.390 14:44:38 env -- env/env.sh@15 -- # uname 00:03:45.390 14:44:38 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:45.390 14:44:38 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:45.390 14:44:38 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:45.390 14:44:38 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:45.390 14:44:38 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:45.390 14:44:38 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.390 ************************************ 00:03:45.390 START TEST env_dpdk_post_init 00:03:45.390 ************************************ 00:03:45.390 14:44:38 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:45.390 EAL: Detected CPU lcores: 96 00:03:45.390 EAL: Detected NUMA nodes: 2 00:03:45.390 EAL: Detected shared linkage of DPDK 00:03:45.390 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:45.390 EAL: Selected IOVA mode 'VA' 00:03:45.390 EAL: VFIO support initialized 00:03:45.390 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:45.390 EAL: Using IOMMU type 1 (Type 1) 00:03:45.390 EAL: Ignore mapping IO port bar(1) 00:03:45.390 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:03:45.390 EAL: Ignore mapping IO port bar(1) 00:03:45.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:03:45.391 EAL: Ignore mapping IO port bar(1) 00:03:45.391 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:03:45.650 EAL: Ignore mapping IO port bar(1) 00:03:45.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:03:45.650 EAL: Ignore mapping IO port bar(1) 00:03:45.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:03:45.650 EAL: Ignore mapping IO port bar(1) 00:03:45.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:03:45.650 EAL: Ignore mapping IO port bar(1) 00:03:45.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:03:45.650 EAL: Ignore mapping IO port bar(1) 00:03:45.650 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:03:46.218 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:03:46.218 EAL: Ignore mapping IO port bar(1) 00:03:46.218 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:03:46.218 EAL: Ignore mapping IO port bar(1) 00:03:46.218 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:03:46.477 EAL: Ignore mapping IO port bar(1) 00:03:46.477 
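Editor's aside: the spdk_ioat probe listing here belongs to env_dpdk_post_init. The xtrace at the start of that test shows how env.sh assembles its arguments: a one-core mask plus, on Linux, a pinned --base-virtaddr. A condensed sketch of that flow, where run_test is the timing/banner wrapper from the suite's autotest_common.sh and $SPDK_DIR stands in for the long workspace path in the log:

  # Flag handling as seen in the env.sh trace above (lines 12-24 of env.sh).
  run_test env_pci "$SPDK_DIR/test/env/pci/pci_ut"

  argv='-c 0x1 '                                    # single-core mask
  if [ "$(uname)" = Linux ]; then
          argv+=--base-virtaddr=0x200000000000      # fixed virtual-address base on Linux
  fi
  run_test env_dpdk_post_init \
          "$SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init" $argv
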
EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:03:46.477 EAL: Ignore mapping IO port bar(1) 00:03:46.477 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:03:46.477 EAL: Ignore mapping IO port bar(1) 00:03:46.477 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:03:46.477 EAL: Ignore mapping IO port bar(1) 00:03:46.477 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:03:46.477 EAL: Ignore mapping IO port bar(1) 00:03:46.477 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:03:46.477 EAL: Ignore mapping IO port bar(1) 00:03:46.477 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:03:49.764 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:03:49.764 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:03:49.764 Starting DPDK initialization... 00:03:49.764 Starting SPDK post initialization... 00:03:49.764 SPDK NVMe probe 00:03:49.764 Attaching to 0000:5e:00.0 00:03:49.764 Attached to 0000:5e:00.0 00:03:49.764 Cleaning up... 00:03:49.764 00:03:49.764 real 0m4.377s 00:03:49.764 user 0m2.977s 00:03:49.764 sys 0m0.472s 00:03:49.764 14:44:42 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.764 14:44:42 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:49.764 ************************************ 00:03:49.764 END TEST env_dpdk_post_init 00:03:49.764 ************************************ 00:03:49.765 14:44:42 env -- env/env.sh@26 -- # uname 00:03:49.765 14:44:42 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:49.765 14:44:42 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/mem_callbacks/mem_callbacks 00:03:49.765 14:44:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:49.765 14:44:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:49.765 14:44:42 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.765 ************************************ 00:03:49.765 START TEST env_mem_callbacks 00:03:49.765 ************************************ 00:03:49.765 14:44:42 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/mem_callbacks/mem_callbacks 00:03:49.765 EAL: Detected CPU lcores: 96 00:03:49.765 EAL: Detected NUMA nodes: 2 00:03:49.765 EAL: Detected shared linkage of DPDK 00:03:49.765 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:49.765 EAL: Selected IOVA mode 'VA' 00:03:49.765 EAL: VFIO support initialized 00:03:49.765 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:49.765 00:03:49.765 00:03:49.765 CUnit - A unit testing framework for C - Version 2.1-3 00:03:49.765 http://cunit.sourceforge.net/ 00:03:49.765 00:03:49.765 00:03:49.765 Suite: memory 00:03:49.765 Test: test ... 
00:03:49.765 register 0x200000200000 2097152 00:03:49.765 malloc 3145728 00:03:49.765 register 0x200000400000 4194304 00:03:49.765 buf 0x200000500000 len 3145728 PASSED 00:03:49.765 malloc 64 00:03:49.765 buf 0x2000004fff40 len 64 PASSED 00:03:49.765 malloc 4194304 00:03:49.765 register 0x200000800000 6291456 00:03:49.765 buf 0x200000a00000 len 4194304 PASSED 00:03:49.765 free 0x200000500000 3145728 00:03:49.765 free 0x2000004fff40 64 00:03:49.765 unregister 0x200000400000 4194304 PASSED 00:03:49.765 free 0x200000a00000 4194304 00:03:49.765 unregister 0x200000800000 6291456 PASSED 00:03:49.765 malloc 8388608 00:03:49.765 register 0x200000400000 10485760 00:03:49.765 buf 0x200000600000 len 8388608 PASSED 00:03:49.765 free 0x200000600000 8388608 00:03:49.765 unregister 0x200000400000 10485760 PASSED 00:03:49.765 passed 00:03:49.765 00:03:49.765 Run Summary: Type Total Ran Passed Failed Inactive 00:03:49.765 suites 1 1 n/a 0 0 00:03:49.765 tests 1 1 1 0 0 00:03:49.765 asserts 15 15 15 0 n/a 00:03:49.765 00:03:49.765 Elapsed time = 0.007 seconds 00:03:49.765 00:03:49.765 real 0m0.057s 00:03:49.765 user 0m0.020s 00:03:49.765 sys 0m0.037s 00:03:49.765 14:44:42 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:49.765 14:44:42 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:49.765 ************************************ 00:03:49.765 END TEST env_mem_callbacks 00:03:49.765 ************************************ 00:03:50.024 00:03:50.024 real 0m6.265s 00:03:50.024 user 0m4.049s 00:03:50.024 sys 0m1.296s 00:03:50.024 14:44:42 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.024 14:44:42 env -- common/autotest_common.sh@10 -- # set +x 00:03:50.024 ************************************ 00:03:50.024 END TEST env 00:03:50.024 ************************************ 00:03:50.024 14:44:42 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/rpc.sh 00:03:50.024 14:44:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.024 14:44:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.024 14:44:42 -- common/autotest_common.sh@10 -- # set +x 00:03:50.024 ************************************ 00:03:50.024 START TEST rpc 00:03:50.024 ************************************ 00:03:50.024 14:44:42 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/rpc.sh 00:03:50.024 * Looking for test storage... 
00:03:50.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc 00:03:50.024 14:44:42 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:50.024 14:44:42 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:50.024 14:44:42 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:50.024 14:44:43 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:50.024 14:44:43 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:50.024 14:44:43 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:50.024 14:44:43 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:50.024 14:44:43 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:50.024 14:44:43 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:50.024 14:44:43 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:50.024 14:44:43 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:50.024 14:44:43 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:50.024 14:44:43 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:50.024 14:44:43 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:50.024 14:44:43 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:50.024 14:44:43 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:50.024 14:44:43 rpc -- scripts/common.sh@345 -- # : 1 00:03:50.024 14:44:43 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:50.024 14:44:43 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:50.024 14:44:43 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:50.024 14:44:43 rpc -- scripts/common.sh@353 -- # local d=1 00:03:50.024 14:44:43 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:50.024 14:44:43 rpc -- scripts/common.sh@355 -- # echo 1 00:03:50.024 14:44:43 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:50.024 14:44:43 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:50.024 14:44:43 rpc -- scripts/common.sh@353 -- # local d=2 00:03:50.024 14:44:43 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:50.024 14:44:43 rpc -- scripts/common.sh@355 -- # echo 2 00:03:50.024 14:44:43 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:50.024 14:44:43 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:50.024 14:44:43 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:50.024 14:44:43 rpc -- scripts/common.sh@368 -- # return 0 00:03:50.024 14:44:43 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:50.024 14:44:43 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:50.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.024 --rc genhtml_branch_coverage=1 00:03:50.024 --rc genhtml_function_coverage=1 00:03:50.025 --rc genhtml_legend=1 00:03:50.025 --rc geninfo_all_blocks=1 00:03:50.025 --rc geninfo_unexecuted_blocks=1 00:03:50.025 00:03:50.025 ' 00:03:50.025 14:44:43 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:50.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.025 --rc genhtml_branch_coverage=1 00:03:50.025 --rc genhtml_function_coverage=1 00:03:50.025 --rc genhtml_legend=1 00:03:50.025 --rc geninfo_all_blocks=1 00:03:50.025 --rc geninfo_unexecuted_blocks=1 00:03:50.025 00:03:50.025 ' 00:03:50.025 14:44:43 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:50.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.025 --rc genhtml_branch_coverage=1 00:03:50.025 --rc genhtml_function_coverage=1 
00:03:50.025 --rc genhtml_legend=1 00:03:50.025 --rc geninfo_all_blocks=1 00:03:50.025 --rc geninfo_unexecuted_blocks=1 00:03:50.025 00:03:50.025 ' 00:03:50.025 14:44:43 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:50.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:50.025 --rc genhtml_branch_coverage=1 00:03:50.025 --rc genhtml_function_coverage=1 00:03:50.025 --rc genhtml_legend=1 00:03:50.025 --rc geninfo_all_blocks=1 00:03:50.025 --rc geninfo_unexecuted_blocks=1 00:03:50.025 00:03:50.025 ' 00:03:50.025 14:44:43 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2915265 00:03:50.025 14:44:43 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:50.025 14:44:43 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -e bdev 00:03:50.025 14:44:43 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2915265 00:03:50.025 14:44:43 rpc -- common/autotest_common.sh@835 -- # '[' -z 2915265 ']' 00:03:50.025 14:44:43 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:50.025 14:44:43 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:50.025 14:44:43 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:50.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:50.025 14:44:43 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:50.025 14:44:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.284 [2024-12-11 14:44:43.119400] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:03:50.284 [2024-12-11 14:44:43.119445] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2915265 ] 00:03:50.284 [2024-12-11 14:44:43.191590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.284 [2024-12-11 14:44:43.230039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:50.284 [2024-12-11 14:44:43.230077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2915265' to capture a snapshot of events at runtime. 00:03:50.284 [2024-12-11 14:44:43.230084] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:50.284 [2024-12-11 14:44:43.230090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:50.284 [2024-12-11 14:44:43.230094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2915265 for offline analysis/debug. 
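Editor's aside: the two notices just above describe how traces can be captured from this target. rpc.sh starts spdk_tgt with -e bdev, which enables the bdev tracepoint group (hence the 0x8 group mask and the all-ones bdev mask that trace_get_info reports later in this test), and a snapshot can be taken live or pulled from the shared-memory file. A condensed sketch of that startup and capture flow, assuming the harness helpers waitforlisten and killprocess are sourced and using $SPDK_DIR for the workspace path:

  # Start the target with the bdev tracepoint group enabled, as rpc.sh does.
  "$SPDK_DIR/build/bin/spdk_tgt" -e bdev &
  spdk_pid=$!
  waitforlisten "$spdk_pid"        # block until /var/tmp/spdk.sock accepts RPCs

  # Live snapshot, exactly as suggested by the notice above:
  #   spdk_trace -s spdk_tgt -p "$spdk_pid"
  # or keep /dev/shm/spdk_tgt_trace.pid$spdk_pid for offline analysis.

  killprocess "$spdk_pid"          # harness helper: kill the target and wait for it
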
00:03:50.284 [2024-12-11 14:44:43.230682] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.543 14:44:43 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:50.543 14:44:43 rpc -- common/autotest_common.sh@868 -- # return 0 00:03:50.543 14:44:43 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc 00:03:50.544 14:44:43 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc 00:03:50.544 14:44:43 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:50.544 14:44:43 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:50.544 14:44:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.544 14:44:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.544 14:44:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.544 ************************************ 00:03:50.544 START TEST rpc_integrity 00:03:50.544 ************************************ 00:03:50.544 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:50.544 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:50.544 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.544 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.544 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.544 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:50.544 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:50.544 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:50.544 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:50.544 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.544 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.544 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.544 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:50.544 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:50.544 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.544 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.544 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.544 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:50.544 { 00:03:50.544 "name": "Malloc0", 00:03:50.544 "aliases": [ 00:03:50.544 "825930fe-5f94-4d5b-b562-0499c8e7068c" 00:03:50.544 ], 00:03:50.544 "product_name": "Malloc disk", 00:03:50.544 "block_size": 512, 00:03:50.544 "num_blocks": 16384, 00:03:50.544 "uuid": "825930fe-5f94-4d5b-b562-0499c8e7068c", 00:03:50.544 "assigned_rate_limits": { 00:03:50.544 "rw_ios_per_sec": 0, 00:03:50.544 "rw_mbytes_per_sec": 0, 00:03:50.544 "r_mbytes_per_sec": 0, 00:03:50.544 "w_mbytes_per_sec": 
0 00:03:50.544 }, 00:03:50.544 "claimed": false, 00:03:50.544 "zoned": false, 00:03:50.544 "supported_io_types": { 00:03:50.544 "read": true, 00:03:50.544 "write": true, 00:03:50.544 "unmap": true, 00:03:50.544 "flush": true, 00:03:50.544 "reset": true, 00:03:50.544 "nvme_admin": false, 00:03:50.544 "nvme_io": false, 00:03:50.544 "nvme_io_md": false, 00:03:50.544 "write_zeroes": true, 00:03:50.544 "zcopy": true, 00:03:50.544 "get_zone_info": false, 00:03:50.544 "zone_management": false, 00:03:50.544 "zone_append": false, 00:03:50.544 "compare": false, 00:03:50.544 "compare_and_write": false, 00:03:50.544 "abort": true, 00:03:50.544 "seek_hole": false, 00:03:50.544 "seek_data": false, 00:03:50.544 "copy": true, 00:03:50.544 "nvme_iov_md": false 00:03:50.544 }, 00:03:50.544 "memory_domains": [ 00:03:50.544 { 00:03:50.544 "dma_device_id": "system", 00:03:50.544 "dma_device_type": 1 00:03:50.544 }, 00:03:50.544 { 00:03:50.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.544 "dma_device_type": 2 00:03:50.544 } 00:03:50.544 ], 00:03:50.544 "driver_specific": {} 00:03:50.544 } 00:03:50.544 ]' 00:03:50.544 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:50.803 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:50.803 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:50.803 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.803 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.803 [2024-12-11 14:44:43.623365] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:50.803 [2024-12-11 14:44:43.623398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:50.803 [2024-12-11 14:44:43.623411] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x154e140 00:03:50.803 [2024-12-11 14:44:43.623417] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:50.803 [2024-12-11 14:44:43.624538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:50.803 [2024-12-11 14:44:43.624560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:50.803 Passthru0 00:03:50.803 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.803 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:50.803 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.803 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.803 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.803 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:50.803 { 00:03:50.803 "name": "Malloc0", 00:03:50.803 "aliases": [ 00:03:50.803 "825930fe-5f94-4d5b-b562-0499c8e7068c" 00:03:50.803 ], 00:03:50.803 "product_name": "Malloc disk", 00:03:50.803 "block_size": 512, 00:03:50.803 "num_blocks": 16384, 00:03:50.803 "uuid": "825930fe-5f94-4d5b-b562-0499c8e7068c", 00:03:50.803 "assigned_rate_limits": { 00:03:50.803 "rw_ios_per_sec": 0, 00:03:50.803 "rw_mbytes_per_sec": 0, 00:03:50.803 "r_mbytes_per_sec": 0, 00:03:50.803 "w_mbytes_per_sec": 0 00:03:50.803 }, 00:03:50.803 "claimed": true, 00:03:50.803 "claim_type": "exclusive_write", 00:03:50.803 "zoned": false, 00:03:50.803 "supported_io_types": { 00:03:50.803 "read": true, 00:03:50.803 "write": true, 00:03:50.803 "unmap": true, 
00:03:50.803 "flush": true, 00:03:50.803 "reset": true, 00:03:50.803 "nvme_admin": false, 00:03:50.803 "nvme_io": false, 00:03:50.803 "nvme_io_md": false, 00:03:50.803 "write_zeroes": true, 00:03:50.803 "zcopy": true, 00:03:50.803 "get_zone_info": false, 00:03:50.803 "zone_management": false, 00:03:50.803 "zone_append": false, 00:03:50.803 "compare": false, 00:03:50.803 "compare_and_write": false, 00:03:50.803 "abort": true, 00:03:50.803 "seek_hole": false, 00:03:50.803 "seek_data": false, 00:03:50.803 "copy": true, 00:03:50.803 "nvme_iov_md": false 00:03:50.803 }, 00:03:50.803 "memory_domains": [ 00:03:50.803 { 00:03:50.803 "dma_device_id": "system", 00:03:50.803 "dma_device_type": 1 00:03:50.803 }, 00:03:50.803 { 00:03:50.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.803 "dma_device_type": 2 00:03:50.803 } 00:03:50.803 ], 00:03:50.803 "driver_specific": {} 00:03:50.803 }, 00:03:50.803 { 00:03:50.803 "name": "Passthru0", 00:03:50.803 "aliases": [ 00:03:50.803 "a60cd005-aaac-505c-9078-f034efb2af2a" 00:03:50.803 ], 00:03:50.803 "product_name": "passthru", 00:03:50.803 "block_size": 512, 00:03:50.803 "num_blocks": 16384, 00:03:50.803 "uuid": "a60cd005-aaac-505c-9078-f034efb2af2a", 00:03:50.803 "assigned_rate_limits": { 00:03:50.803 "rw_ios_per_sec": 0, 00:03:50.803 "rw_mbytes_per_sec": 0, 00:03:50.803 "r_mbytes_per_sec": 0, 00:03:50.803 "w_mbytes_per_sec": 0 00:03:50.803 }, 00:03:50.803 "claimed": false, 00:03:50.803 "zoned": false, 00:03:50.803 "supported_io_types": { 00:03:50.803 "read": true, 00:03:50.803 "write": true, 00:03:50.803 "unmap": true, 00:03:50.803 "flush": true, 00:03:50.803 "reset": true, 00:03:50.803 "nvme_admin": false, 00:03:50.803 "nvme_io": false, 00:03:50.803 "nvme_io_md": false, 00:03:50.803 "write_zeroes": true, 00:03:50.803 "zcopy": true, 00:03:50.803 "get_zone_info": false, 00:03:50.803 "zone_management": false, 00:03:50.803 "zone_append": false, 00:03:50.803 "compare": false, 00:03:50.803 "compare_and_write": false, 00:03:50.803 "abort": true, 00:03:50.803 "seek_hole": false, 00:03:50.803 "seek_data": false, 00:03:50.803 "copy": true, 00:03:50.803 "nvme_iov_md": false 00:03:50.803 }, 00:03:50.803 "memory_domains": [ 00:03:50.803 { 00:03:50.803 "dma_device_id": "system", 00:03:50.803 "dma_device_type": 1 00:03:50.803 }, 00:03:50.803 { 00:03:50.803 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:50.803 "dma_device_type": 2 00:03:50.803 } 00:03:50.803 ], 00:03:50.803 "driver_specific": { 00:03:50.803 "passthru": { 00:03:50.803 "name": "Passthru0", 00:03:50.803 "base_bdev_name": "Malloc0" 00:03:50.803 } 00:03:50.803 } 00:03:50.803 } 00:03:50.803 ]' 00:03:50.803 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:50.803 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:50.803 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:50.803 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.803 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.803 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.803 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:50.803 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.803 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.803 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.803 14:44:43 rpc.rpc_integrity -- 
rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:50.803 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.803 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.803 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.803 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:50.803 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:50.803 14:44:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:50.803 00:03:50.803 real 0m0.270s 00:03:50.804 user 0m0.169s 00:03:50.804 sys 0m0.033s 00:03:50.804 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:50.804 14:44:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:50.804 ************************************ 00:03:50.804 END TEST rpc_integrity 00:03:50.804 ************************************ 00:03:50.804 14:44:43 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:50.804 14:44:43 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:50.804 14:44:43 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:50.804 14:44:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:50.804 ************************************ 00:03:50.804 START TEST rpc_plugins 00:03:50.804 ************************************ 00:03:50.804 14:44:43 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:50.804 14:44:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:50.804 14:44:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:50.804 14:44:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:50.804 14:44:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:50.804 14:44:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:51.062 14:44:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:51.062 14:44:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.062 14:44:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.062 14:44:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.062 14:44:43 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:51.062 { 00:03:51.062 "name": "Malloc1", 00:03:51.062 "aliases": [ 00:03:51.062 "c66bc194-b17a-45e2-88fe-a2feb8473811" 00:03:51.062 ], 00:03:51.062 "product_name": "Malloc disk", 00:03:51.062 "block_size": 4096, 00:03:51.062 "num_blocks": 256, 00:03:51.062 "uuid": "c66bc194-b17a-45e2-88fe-a2feb8473811", 00:03:51.062 "assigned_rate_limits": { 00:03:51.062 "rw_ios_per_sec": 0, 00:03:51.062 "rw_mbytes_per_sec": 0, 00:03:51.062 "r_mbytes_per_sec": 0, 00:03:51.062 "w_mbytes_per_sec": 0 00:03:51.062 }, 00:03:51.062 "claimed": false, 00:03:51.062 "zoned": false, 00:03:51.062 "supported_io_types": { 00:03:51.062 "read": true, 00:03:51.062 "write": true, 00:03:51.062 "unmap": true, 00:03:51.062 "flush": true, 00:03:51.062 "reset": true, 00:03:51.062 "nvme_admin": false, 00:03:51.062 "nvme_io": false, 00:03:51.062 "nvme_io_md": false, 00:03:51.062 "write_zeroes": true, 00:03:51.062 "zcopy": true, 00:03:51.063 "get_zone_info": false, 00:03:51.063 "zone_management": false, 00:03:51.063 "zone_append": false, 00:03:51.063 "compare": false, 00:03:51.063 "compare_and_write": false, 00:03:51.063 "abort": true, 00:03:51.063 "seek_hole": false, 00:03:51.063 "seek_data": false, 00:03:51.063 "copy": true, 00:03:51.063 
"nvme_iov_md": false 00:03:51.063 }, 00:03:51.063 "memory_domains": [ 00:03:51.063 { 00:03:51.063 "dma_device_id": "system", 00:03:51.063 "dma_device_type": 1 00:03:51.063 }, 00:03:51.063 { 00:03:51.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.063 "dma_device_type": 2 00:03:51.063 } 00:03:51.063 ], 00:03:51.063 "driver_specific": {} 00:03:51.063 } 00:03:51.063 ]' 00:03:51.063 14:44:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:51.063 14:44:43 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:51.063 14:44:43 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:51.063 14:44:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.063 14:44:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.063 14:44:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.063 14:44:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:51.063 14:44:43 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.063 14:44:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.063 14:44:43 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.063 14:44:43 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:51.063 14:44:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:51.063 14:44:43 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:51.063 00:03:51.063 real 0m0.146s 00:03:51.063 user 0m0.084s 00:03:51.063 sys 0m0.023s 00:03:51.063 14:44:43 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.063 14:44:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:51.063 ************************************ 00:03:51.063 END TEST rpc_plugins 00:03:51.063 ************************************ 00:03:51.063 14:44:44 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:51.063 14:44:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.063 14:44:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.063 14:44:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.063 ************************************ 00:03:51.063 START TEST rpc_trace_cmd_test 00:03:51.063 ************************************ 00:03:51.063 14:44:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:51.063 14:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:51.063 14:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:51.063 14:44:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.063 14:44:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:51.063 14:44:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.063 14:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:51.063 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2915265", 00:03:51.063 "tpoint_group_mask": "0x8", 00:03:51.063 "iscsi_conn": { 00:03:51.063 "mask": "0x2", 00:03:51.063 "tpoint_mask": "0x0" 00:03:51.063 }, 00:03:51.063 "scsi": { 00:03:51.063 "mask": "0x4", 00:03:51.063 "tpoint_mask": "0x0" 00:03:51.063 }, 00:03:51.063 "bdev": { 00:03:51.063 "mask": "0x8", 00:03:51.063 "tpoint_mask": "0xffffffffffffffff" 00:03:51.063 }, 00:03:51.063 "nvmf_rdma": { 00:03:51.063 "mask": "0x10", 00:03:51.063 "tpoint_mask": "0x0" 00:03:51.063 }, 00:03:51.063 "nvmf_tcp": { 00:03:51.063 "mask": "0x20", 
00:03:51.063 "tpoint_mask": "0x0" 00:03:51.063 }, 00:03:51.063 "ftl": { 00:03:51.063 "mask": "0x40", 00:03:51.063 "tpoint_mask": "0x0" 00:03:51.063 }, 00:03:51.063 "blobfs": { 00:03:51.063 "mask": "0x80", 00:03:51.063 "tpoint_mask": "0x0" 00:03:51.063 }, 00:03:51.063 "dsa": { 00:03:51.063 "mask": "0x200", 00:03:51.063 "tpoint_mask": "0x0" 00:03:51.063 }, 00:03:51.063 "thread": { 00:03:51.063 "mask": "0x400", 00:03:51.063 "tpoint_mask": "0x0" 00:03:51.063 }, 00:03:51.063 "nvme_pcie": { 00:03:51.063 "mask": "0x800", 00:03:51.063 "tpoint_mask": "0x0" 00:03:51.063 }, 00:03:51.063 "iaa": { 00:03:51.063 "mask": "0x1000", 00:03:51.063 "tpoint_mask": "0x0" 00:03:51.063 }, 00:03:51.063 "nvme_tcp": { 00:03:51.063 "mask": "0x2000", 00:03:51.063 "tpoint_mask": "0x0" 00:03:51.063 }, 00:03:51.063 "bdev_nvme": { 00:03:51.063 "mask": "0x4000", 00:03:51.063 "tpoint_mask": "0x0" 00:03:51.063 }, 00:03:51.063 "sock": { 00:03:51.063 "mask": "0x8000", 00:03:51.063 "tpoint_mask": "0x0" 00:03:51.063 }, 00:03:51.063 "blob": { 00:03:51.063 "mask": "0x10000", 00:03:51.063 "tpoint_mask": "0x0" 00:03:51.063 }, 00:03:51.063 "bdev_raid": { 00:03:51.063 "mask": "0x20000", 00:03:51.063 "tpoint_mask": "0x0" 00:03:51.063 }, 00:03:51.063 "scheduler": { 00:03:51.063 "mask": "0x40000", 00:03:51.063 "tpoint_mask": "0x0" 00:03:51.063 } 00:03:51.063 }' 00:03:51.063 14:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:51.322 14:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:51.322 14:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:51.322 14:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:51.322 14:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:51.322 14:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:51.322 14:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:51.322 14:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:51.322 14:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:51.322 14:44:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:51.322 00:03:51.322 real 0m0.224s 00:03:51.322 user 0m0.190s 00:03:51.322 sys 0m0.027s 00:03:51.322 14:44:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.322 14:44:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:51.322 ************************************ 00:03:51.322 END TEST rpc_trace_cmd_test 00:03:51.322 ************************************ 00:03:51.322 14:44:44 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:51.322 14:44:44 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:51.322 14:44:44 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:51.322 14:44:44 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:51.322 14:44:44 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:51.322 14:44:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:51.322 ************************************ 00:03:51.322 START TEST rpc_daemon_integrity 00:03:51.322 ************************************ 00:03:51.322 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:51.322 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:51.322 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.322 14:44:44 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.322 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.322 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:51.322 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:51.581 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:51.581 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:51.581 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.581 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.581 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.581 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:51.581 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:51.581 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.581 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.581 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.581 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:51.581 { 00:03:51.581 "name": "Malloc2", 00:03:51.581 "aliases": [ 00:03:51.581 "6a0e2699-dd9a-49dd-9cc8-7f0dd373c5d0" 00:03:51.581 ], 00:03:51.581 "product_name": "Malloc disk", 00:03:51.581 "block_size": 512, 00:03:51.581 "num_blocks": 16384, 00:03:51.581 "uuid": "6a0e2699-dd9a-49dd-9cc8-7f0dd373c5d0", 00:03:51.581 "assigned_rate_limits": { 00:03:51.581 "rw_ios_per_sec": 0, 00:03:51.581 "rw_mbytes_per_sec": 0, 00:03:51.581 "r_mbytes_per_sec": 0, 00:03:51.581 "w_mbytes_per_sec": 0 00:03:51.581 }, 00:03:51.581 "claimed": false, 00:03:51.581 "zoned": false, 00:03:51.581 "supported_io_types": { 00:03:51.581 "read": true, 00:03:51.581 "write": true, 00:03:51.581 "unmap": true, 00:03:51.581 "flush": true, 00:03:51.581 "reset": true, 00:03:51.581 "nvme_admin": false, 00:03:51.581 "nvme_io": false, 00:03:51.581 "nvme_io_md": false, 00:03:51.581 "write_zeroes": true, 00:03:51.581 "zcopy": true, 00:03:51.581 "get_zone_info": false, 00:03:51.581 "zone_management": false, 00:03:51.581 "zone_append": false, 00:03:51.581 "compare": false, 00:03:51.581 "compare_and_write": false, 00:03:51.581 "abort": true, 00:03:51.581 "seek_hole": false, 00:03:51.582 "seek_data": false, 00:03:51.582 "copy": true, 00:03:51.582 "nvme_iov_md": false 00:03:51.582 }, 00:03:51.582 "memory_domains": [ 00:03:51.582 { 00:03:51.582 "dma_device_id": "system", 00:03:51.582 "dma_device_type": 1 00:03:51.582 }, 00:03:51.582 { 00:03:51.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.582 "dma_device_type": 2 00:03:51.582 } 00:03:51.582 ], 00:03:51.582 "driver_specific": {} 00:03:51.582 } 00:03:51.582 ]' 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.582 [2024-12-11 14:44:44.469662] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:51.582 
[2024-12-11 14:44:44.469690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:51.582 [2024-12-11 14:44:44.469702] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x140c490 00:03:51.582 [2024-12-11 14:44:44.469708] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:51.582 [2024-12-11 14:44:44.470692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:51.582 [2024-12-11 14:44:44.470714] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:51.582 Passthru0 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:51.582 { 00:03:51.582 "name": "Malloc2", 00:03:51.582 "aliases": [ 00:03:51.582 "6a0e2699-dd9a-49dd-9cc8-7f0dd373c5d0" 00:03:51.582 ], 00:03:51.582 "product_name": "Malloc disk", 00:03:51.582 "block_size": 512, 00:03:51.582 "num_blocks": 16384, 00:03:51.582 "uuid": "6a0e2699-dd9a-49dd-9cc8-7f0dd373c5d0", 00:03:51.582 "assigned_rate_limits": { 00:03:51.582 "rw_ios_per_sec": 0, 00:03:51.582 "rw_mbytes_per_sec": 0, 00:03:51.582 "r_mbytes_per_sec": 0, 00:03:51.582 "w_mbytes_per_sec": 0 00:03:51.582 }, 00:03:51.582 "claimed": true, 00:03:51.582 "claim_type": "exclusive_write", 00:03:51.582 "zoned": false, 00:03:51.582 "supported_io_types": { 00:03:51.582 "read": true, 00:03:51.582 "write": true, 00:03:51.582 "unmap": true, 00:03:51.582 "flush": true, 00:03:51.582 "reset": true, 00:03:51.582 "nvme_admin": false, 00:03:51.582 "nvme_io": false, 00:03:51.582 "nvme_io_md": false, 00:03:51.582 "write_zeroes": true, 00:03:51.582 "zcopy": true, 00:03:51.582 "get_zone_info": false, 00:03:51.582 "zone_management": false, 00:03:51.582 "zone_append": false, 00:03:51.582 "compare": false, 00:03:51.582 "compare_and_write": false, 00:03:51.582 "abort": true, 00:03:51.582 "seek_hole": false, 00:03:51.582 "seek_data": false, 00:03:51.582 "copy": true, 00:03:51.582 "nvme_iov_md": false 00:03:51.582 }, 00:03:51.582 "memory_domains": [ 00:03:51.582 { 00:03:51.582 "dma_device_id": "system", 00:03:51.582 "dma_device_type": 1 00:03:51.582 }, 00:03:51.582 { 00:03:51.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.582 "dma_device_type": 2 00:03:51.582 } 00:03:51.582 ], 00:03:51.582 "driver_specific": {} 00:03:51.582 }, 00:03:51.582 { 00:03:51.582 "name": "Passthru0", 00:03:51.582 "aliases": [ 00:03:51.582 "c515e093-c66f-51d6-b8b5-bef3db2ce436" 00:03:51.582 ], 00:03:51.582 "product_name": "passthru", 00:03:51.582 "block_size": 512, 00:03:51.582 "num_blocks": 16384, 00:03:51.582 "uuid": "c515e093-c66f-51d6-b8b5-bef3db2ce436", 00:03:51.582 "assigned_rate_limits": { 00:03:51.582 "rw_ios_per_sec": 0, 00:03:51.582 "rw_mbytes_per_sec": 0, 00:03:51.582 "r_mbytes_per_sec": 0, 00:03:51.582 "w_mbytes_per_sec": 0 00:03:51.582 }, 00:03:51.582 "claimed": false, 00:03:51.582 "zoned": false, 00:03:51.582 "supported_io_types": { 00:03:51.582 "read": true, 00:03:51.582 "write": true, 00:03:51.582 "unmap": true, 00:03:51.582 "flush": true, 00:03:51.582 "reset": true, 
00:03:51.582 "nvme_admin": false, 00:03:51.582 "nvme_io": false, 00:03:51.582 "nvme_io_md": false, 00:03:51.582 "write_zeroes": true, 00:03:51.582 "zcopy": true, 00:03:51.582 "get_zone_info": false, 00:03:51.582 "zone_management": false, 00:03:51.582 "zone_append": false, 00:03:51.582 "compare": false, 00:03:51.582 "compare_and_write": false, 00:03:51.582 "abort": true, 00:03:51.582 "seek_hole": false, 00:03:51.582 "seek_data": false, 00:03:51.582 "copy": true, 00:03:51.582 "nvme_iov_md": false 00:03:51.582 }, 00:03:51.582 "memory_domains": [ 00:03:51.582 { 00:03:51.582 "dma_device_id": "system", 00:03:51.582 "dma_device_type": 1 00:03:51.582 }, 00:03:51.582 { 00:03:51.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:51.582 "dma_device_type": 2 00:03:51.582 } 00:03:51.582 ], 00:03:51.582 "driver_specific": { 00:03:51.582 "passthru": { 00:03:51.582 "name": "Passthru0", 00:03:51.582 "base_bdev_name": "Malloc2" 00:03:51.582 } 00:03:51.582 } 00:03:51.582 } 00:03:51.582 ]' 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:51.582 00:03:51.582 real 0m0.280s 00:03:51.582 user 0m0.188s 00:03:51.582 sys 0m0.031s 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:51.582 14:44:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:51.582 ************************************ 00:03:51.582 END TEST rpc_daemon_integrity 00:03:51.582 ************************************ 00:03:51.841 14:44:44 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:51.841 14:44:44 rpc -- rpc/rpc.sh@84 -- # killprocess 2915265 00:03:51.841 14:44:44 rpc -- common/autotest_common.sh@954 -- # '[' -z 2915265 ']' 00:03:51.841 14:44:44 rpc -- common/autotest_common.sh@958 -- # kill -0 2915265 00:03:51.841 14:44:44 rpc -- common/autotest_common.sh@959 -- # uname 00:03:51.841 14:44:44 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:51.841 14:44:44 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2915265 
00:03:51.841 14:44:44 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:51.841 14:44:44 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:51.841 14:44:44 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2915265' 00:03:51.842 killing process with pid 2915265 00:03:51.842 14:44:44 rpc -- common/autotest_common.sh@973 -- # kill 2915265 00:03:51.842 14:44:44 rpc -- common/autotest_common.sh@978 -- # wait 2915265 00:03:52.102 00:03:52.102 real 0m2.109s 00:03:52.102 user 0m2.681s 00:03:52.102 sys 0m0.687s 00:03:52.102 14:44:44 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.102 14:44:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.102 ************************************ 00:03:52.102 END TEST rpc 00:03:52.102 ************************************ 00:03:52.102 14:44:45 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/skip_rpc.sh 00:03:52.102 14:44:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.102 14:44:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.102 14:44:45 -- common/autotest_common.sh@10 -- # set +x 00:03:52.102 ************************************ 00:03:52.102 START TEST skip_rpc 00:03:52.102 ************************************ 00:03:52.102 14:44:45 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/skip_rpc.sh 00:03:52.361 * Looking for test storage... 00:03:52.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc 00:03:52.361 14:44:45 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:52.361 14:44:45 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:52.361 14:44:45 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:52.361 14:44:45 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.361 14:44:45 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:52.361 14:44:45 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.361 14:44:45 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:52.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.361 --rc genhtml_branch_coverage=1 00:03:52.361 --rc genhtml_function_coverage=1 00:03:52.361 --rc genhtml_legend=1 00:03:52.361 --rc geninfo_all_blocks=1 00:03:52.361 --rc geninfo_unexecuted_blocks=1 00:03:52.361 00:03:52.361 ' 00:03:52.361 14:44:45 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:52.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.361 --rc genhtml_branch_coverage=1 00:03:52.361 --rc genhtml_function_coverage=1 00:03:52.361 --rc genhtml_legend=1 00:03:52.361 --rc geninfo_all_blocks=1 00:03:52.361 --rc geninfo_unexecuted_blocks=1 00:03:52.361 00:03:52.361 ' 00:03:52.361 14:44:45 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:52.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.361 --rc genhtml_branch_coverage=1 00:03:52.361 --rc genhtml_function_coverage=1 00:03:52.361 --rc genhtml_legend=1 00:03:52.361 --rc geninfo_all_blocks=1 00:03:52.361 --rc geninfo_unexecuted_blocks=1 00:03:52.361 00:03:52.361 ' 00:03:52.361 14:44:45 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:52.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.361 --rc genhtml_branch_coverage=1 00:03:52.361 --rc genhtml_function_coverage=1 00:03:52.361 --rc genhtml_legend=1 00:03:52.361 --rc geninfo_all_blocks=1 00:03:52.361 --rc geninfo_unexecuted_blocks=1 00:03:52.361 00:03:52.361 ' 00:03:52.361 14:44:45 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/config.json 00:03:52.361 14:44:45 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/log.txt 00:03:52.361 14:44:45 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:52.361 14:44:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.361 14:44:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.361 14:44:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:52.361 ************************************ 00:03:52.361 START TEST skip_rpc 00:03:52.361 ************************************ 00:03:52.361 14:44:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 
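Editor's aside: the trace below steps through test_skip_rpc, which starts the target with the RPC server disabled, waits, and asserts that a plain RPC then fails. Stripped of the xtrace noise, the pattern is roughly the following sketch, where NOT is the harness helper that inverts an exit status and rpc_cmd/killprocess likewise come from autotest_common.sh:

  # Essence of test_skip_rpc: with --no-rpc-server there is nothing listening,
  # so rpc_cmd must fail, and NOT turns that expected failure into a pass.
  "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
  spdk_pid=$!
  sleep 5                              # the test simply waits instead of polling the socket
  NOT rpc_cmd spdk_get_version
  killprocess "$spdk_pid"
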
00:03:52.361 14:44:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2915900 00:03:52.361 14:44:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:52.361 14:44:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:52.361 14:44:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:52.361 [2024-12-11 14:44:45.330664] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:03:52.361 [2024-12-11 14:44:45.330702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2915900 ] 00:03:52.361 [2024-12-11 14:44:45.404663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.620 [2024-12-11 14:44:45.444073] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2915900 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2915900 ']' 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2915900 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2915900 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2915900' 00:03:57.892 killing process with pid 2915900 
00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2915900 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2915900 00:03:57.892 00:03:57.892 real 0m5.368s 00:03:57.892 user 0m5.131s 00:03:57.892 sys 0m0.275s 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.892 14:44:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.892 ************************************ 00:03:57.892 END TEST skip_rpc 00:03:57.892 ************************************ 00:03:57.892 14:44:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:57.892 14:44:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.892 14:44:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.892 14:44:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.892 ************************************ 00:03:57.892 START TEST skip_rpc_with_json 00:03:57.892 ************************************ 00:03:57.892 14:44:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:57.892 14:44:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:57.892 14:44:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2916846 00:03:57.892 14:44:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:57.892 14:44:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:03:57.892 14:44:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2916846 00:03:57.892 14:44:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2916846 ']' 00:03:57.892 14:44:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.892 14:44:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:57.892 14:44:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:57.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.892 14:44:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:57.892 14:44:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:57.892 [2024-12-11 14:44:50.769228] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:03:57.892 [2024-12-11 14:44:50.769270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2916846 ] 00:03:57.892 [2024-12-11 14:44:50.844142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.892 [2024-12-11 14:44:50.880562] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.150 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:58.150 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:58.150 14:44:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:58.150 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.150 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.150 [2024-12-11 14:44:51.106908] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:58.150 request: 00:03:58.150 { 00:03:58.150 "trtype": "tcp", 00:03:58.150 "method": "nvmf_get_transports", 00:03:58.150 "req_id": 1 00:03:58.150 } 00:03:58.150 Got JSON-RPC error response 00:03:58.151 response: 00:03:58.151 { 00:03:58.151 "code": -19, 00:03:58.151 "message": "No such device" 00:03:58.151 } 00:03:58.151 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:58.151 14:44:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:58.151 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.151 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.151 [2024-12-11 14:44:51.119019] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:58.151 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.151 14:44:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:58.151 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:58.151 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:58.409 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:58.409 14:44:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/config.json 00:03:58.409 { 00:03:58.409 "subsystems": [ 00:03:58.409 { 00:03:58.409 "subsystem": "fsdev", 00:03:58.409 "config": [ 00:03:58.409 { 00:03:58.409 "method": "fsdev_set_opts", 00:03:58.409 "params": { 00:03:58.409 "fsdev_io_pool_size": 65535, 00:03:58.409 "fsdev_io_cache_size": 256 00:03:58.409 } 00:03:58.409 } 00:03:58.409 ] 00:03:58.409 }, 00:03:58.409 { 00:03:58.409 "subsystem": "vfio_user_target", 00:03:58.409 "config": null 00:03:58.409 }, 00:03:58.409 { 00:03:58.409 "subsystem": "keyring", 00:03:58.409 "config": [] 00:03:58.409 }, 00:03:58.409 { 00:03:58.409 "subsystem": "iobuf", 00:03:58.409 "config": [ 00:03:58.409 { 00:03:58.409 "method": "iobuf_set_options", 00:03:58.409 "params": { 00:03:58.409 "small_pool_count": 8192, 00:03:58.409 "large_pool_count": 1024, 00:03:58.409 "small_bufsize": 8192, 00:03:58.409 "large_bufsize": 135168, 00:03:58.409 "enable_numa": false 00:03:58.409 } 00:03:58.409 } 
00:03:58.409 ] 00:03:58.409 }, 00:03:58.409 { 00:03:58.409 "subsystem": "sock", 00:03:58.409 "config": [ 00:03:58.409 { 00:03:58.409 "method": "sock_set_default_impl", 00:03:58.409 "params": { 00:03:58.409 "impl_name": "posix" 00:03:58.409 } 00:03:58.409 }, 00:03:58.409 { 00:03:58.409 "method": "sock_impl_set_options", 00:03:58.409 "params": { 00:03:58.409 "impl_name": "ssl", 00:03:58.409 "recv_buf_size": 4096, 00:03:58.409 "send_buf_size": 4096, 00:03:58.409 "enable_recv_pipe": true, 00:03:58.409 "enable_quickack": false, 00:03:58.409 "enable_placement_id": 0, 00:03:58.409 "enable_zerocopy_send_server": true, 00:03:58.409 "enable_zerocopy_send_client": false, 00:03:58.409 "zerocopy_threshold": 0, 00:03:58.409 "tls_version": 0, 00:03:58.409 "enable_ktls": false 00:03:58.409 } 00:03:58.409 }, 00:03:58.409 { 00:03:58.409 "method": "sock_impl_set_options", 00:03:58.409 "params": { 00:03:58.409 "impl_name": "posix", 00:03:58.409 "recv_buf_size": 2097152, 00:03:58.409 "send_buf_size": 2097152, 00:03:58.409 "enable_recv_pipe": true, 00:03:58.409 "enable_quickack": false, 00:03:58.409 "enable_placement_id": 0, 00:03:58.409 "enable_zerocopy_send_server": true, 00:03:58.409 "enable_zerocopy_send_client": false, 00:03:58.409 "zerocopy_threshold": 0, 00:03:58.409 "tls_version": 0, 00:03:58.409 "enable_ktls": false 00:03:58.409 } 00:03:58.409 } 00:03:58.409 ] 00:03:58.409 }, 00:03:58.409 { 00:03:58.409 "subsystem": "vmd", 00:03:58.409 "config": [] 00:03:58.409 }, 00:03:58.409 { 00:03:58.409 "subsystem": "accel", 00:03:58.409 "config": [ 00:03:58.409 { 00:03:58.409 "method": "accel_set_options", 00:03:58.409 "params": { 00:03:58.409 "small_cache_size": 128, 00:03:58.409 "large_cache_size": 16, 00:03:58.409 "task_count": 2048, 00:03:58.409 "sequence_count": 2048, 00:03:58.409 "buf_count": 2048 00:03:58.409 } 00:03:58.409 } 00:03:58.409 ] 00:03:58.409 }, 00:03:58.409 { 00:03:58.409 "subsystem": "bdev", 00:03:58.409 "config": [ 00:03:58.409 { 00:03:58.409 "method": "bdev_set_options", 00:03:58.409 "params": { 00:03:58.409 "bdev_io_pool_size": 65535, 00:03:58.409 "bdev_io_cache_size": 256, 00:03:58.409 "bdev_auto_examine": true, 00:03:58.409 "iobuf_small_cache_size": 128, 00:03:58.409 "iobuf_large_cache_size": 16 00:03:58.409 } 00:03:58.409 }, 00:03:58.409 { 00:03:58.409 "method": "bdev_raid_set_options", 00:03:58.409 "params": { 00:03:58.409 "process_window_size_kb": 1024, 00:03:58.409 "process_max_bandwidth_mb_sec": 0 00:03:58.409 } 00:03:58.409 }, 00:03:58.409 { 00:03:58.409 "method": "bdev_iscsi_set_options", 00:03:58.409 "params": { 00:03:58.409 "timeout_sec": 30 00:03:58.409 } 00:03:58.409 }, 00:03:58.409 { 00:03:58.409 "method": "bdev_nvme_set_options", 00:03:58.409 "params": { 00:03:58.409 "action_on_timeout": "none", 00:03:58.409 "timeout_us": 0, 00:03:58.409 "timeout_admin_us": 0, 00:03:58.409 "keep_alive_timeout_ms": 10000, 00:03:58.409 "arbitration_burst": 0, 00:03:58.409 "low_priority_weight": 0, 00:03:58.409 "medium_priority_weight": 0, 00:03:58.409 "high_priority_weight": 0, 00:03:58.409 "nvme_adminq_poll_period_us": 10000, 00:03:58.409 "nvme_ioq_poll_period_us": 0, 00:03:58.409 "io_queue_requests": 0, 00:03:58.409 "delay_cmd_submit": true, 00:03:58.409 "transport_retry_count": 4, 00:03:58.409 "bdev_retry_count": 3, 00:03:58.409 "transport_ack_timeout": 0, 00:03:58.409 "ctrlr_loss_timeout_sec": 0, 00:03:58.409 "reconnect_delay_sec": 0, 00:03:58.409 "fast_io_fail_timeout_sec": 0, 00:03:58.410 "disable_auto_failback": false, 00:03:58.410 "generate_uuids": false, 00:03:58.410 "transport_tos": 
0, 00:03:58.410 "nvme_error_stat": false, 00:03:58.410 "rdma_srq_size": 0, 00:03:58.410 "io_path_stat": false, 00:03:58.410 "allow_accel_sequence": false, 00:03:58.410 "rdma_max_cq_size": 0, 00:03:58.410 "rdma_cm_event_timeout_ms": 0, 00:03:58.410 "dhchap_digests": [ 00:03:58.410 "sha256", 00:03:58.410 "sha384", 00:03:58.410 "sha512" 00:03:58.410 ], 00:03:58.410 "dhchap_dhgroups": [ 00:03:58.410 "null", 00:03:58.410 "ffdhe2048", 00:03:58.410 "ffdhe3072", 00:03:58.410 "ffdhe4096", 00:03:58.410 "ffdhe6144", 00:03:58.410 "ffdhe8192" 00:03:58.410 ], 00:03:58.410 "rdma_umr_per_io": false 00:03:58.410 } 00:03:58.410 }, 00:03:58.410 { 00:03:58.410 "method": "bdev_nvme_set_hotplug", 00:03:58.410 "params": { 00:03:58.410 "period_us": 100000, 00:03:58.410 "enable": false 00:03:58.410 } 00:03:58.410 }, 00:03:58.410 { 00:03:58.410 "method": "bdev_wait_for_examine" 00:03:58.410 } 00:03:58.410 ] 00:03:58.410 }, 00:03:58.410 { 00:03:58.410 "subsystem": "scsi", 00:03:58.410 "config": null 00:03:58.410 }, 00:03:58.410 { 00:03:58.410 "subsystem": "scheduler", 00:03:58.410 "config": [ 00:03:58.410 { 00:03:58.410 "method": "framework_set_scheduler", 00:03:58.410 "params": { 00:03:58.410 "name": "static" 00:03:58.410 } 00:03:58.410 } 00:03:58.410 ] 00:03:58.410 }, 00:03:58.410 { 00:03:58.410 "subsystem": "vhost_scsi", 00:03:58.410 "config": [] 00:03:58.410 }, 00:03:58.410 { 00:03:58.410 "subsystem": "vhost_blk", 00:03:58.410 "config": [] 00:03:58.410 }, 00:03:58.410 { 00:03:58.410 "subsystem": "ublk", 00:03:58.410 "config": [] 00:03:58.410 }, 00:03:58.410 { 00:03:58.410 "subsystem": "nbd", 00:03:58.410 "config": [] 00:03:58.410 }, 00:03:58.410 { 00:03:58.410 "subsystem": "nvmf", 00:03:58.410 "config": [ 00:03:58.410 { 00:03:58.410 "method": "nvmf_set_config", 00:03:58.410 "params": { 00:03:58.410 "discovery_filter": "match_any", 00:03:58.410 "admin_cmd_passthru": { 00:03:58.410 "identify_ctrlr": false 00:03:58.410 }, 00:03:58.410 "dhchap_digests": [ 00:03:58.410 "sha256", 00:03:58.410 "sha384", 00:03:58.410 "sha512" 00:03:58.410 ], 00:03:58.410 "dhchap_dhgroups": [ 00:03:58.410 "null", 00:03:58.410 "ffdhe2048", 00:03:58.410 "ffdhe3072", 00:03:58.410 "ffdhe4096", 00:03:58.410 "ffdhe6144", 00:03:58.410 "ffdhe8192" 00:03:58.410 ] 00:03:58.410 } 00:03:58.410 }, 00:03:58.410 { 00:03:58.410 "method": "nvmf_set_max_subsystems", 00:03:58.410 "params": { 00:03:58.410 "max_subsystems": 1024 00:03:58.410 } 00:03:58.410 }, 00:03:58.410 { 00:03:58.410 "method": "nvmf_set_crdt", 00:03:58.410 "params": { 00:03:58.410 "crdt1": 0, 00:03:58.410 "crdt2": 0, 00:03:58.410 "crdt3": 0 00:03:58.410 } 00:03:58.410 }, 00:03:58.410 { 00:03:58.410 "method": "nvmf_create_transport", 00:03:58.410 "params": { 00:03:58.410 "trtype": "TCP", 00:03:58.410 "max_queue_depth": 128, 00:03:58.410 "max_io_qpairs_per_ctrlr": 127, 00:03:58.410 "in_capsule_data_size": 4096, 00:03:58.410 "max_io_size": 131072, 00:03:58.410 "io_unit_size": 131072, 00:03:58.410 "max_aq_depth": 128, 00:03:58.410 "num_shared_buffers": 511, 00:03:58.410 "buf_cache_size": 4294967295, 00:03:58.410 "dif_insert_or_strip": false, 00:03:58.410 "zcopy": false, 00:03:58.410 "c2h_success": true, 00:03:58.410 "sock_priority": 0, 00:03:58.410 "abort_timeout_sec": 1, 00:03:58.410 "ack_timeout": 0, 00:03:58.410 "data_wr_pool_size": 0 00:03:58.410 } 00:03:58.410 } 00:03:58.410 ] 00:03:58.410 }, 00:03:58.410 { 00:03:58.410 "subsystem": "iscsi", 00:03:58.410 "config": [ 00:03:58.410 { 00:03:58.410 "method": "iscsi_set_options", 00:03:58.410 "params": { 00:03:58.410 "node_base": 
"iqn.2016-06.io.spdk", 00:03:58.410 "max_sessions": 128, 00:03:58.410 "max_connections_per_session": 2, 00:03:58.410 "max_queue_depth": 64, 00:03:58.410 "default_time2wait": 2, 00:03:58.410 "default_time2retain": 20, 00:03:58.410 "first_burst_length": 8192, 00:03:58.410 "immediate_data": true, 00:03:58.410 "allow_duplicated_isid": false, 00:03:58.410 "error_recovery_level": 0, 00:03:58.410 "nop_timeout": 60, 00:03:58.410 "nop_in_interval": 30, 00:03:58.410 "disable_chap": false, 00:03:58.410 "require_chap": false, 00:03:58.410 "mutual_chap": false, 00:03:58.410 "chap_group": 0, 00:03:58.410 "max_large_datain_per_connection": 64, 00:03:58.410 "max_r2t_per_connection": 4, 00:03:58.410 "pdu_pool_size": 36864, 00:03:58.410 "immediate_data_pool_size": 16384, 00:03:58.410 "data_out_pool_size": 2048 00:03:58.410 } 00:03:58.410 } 00:03:58.410 ] 00:03:58.410 } 00:03:58.410 ] 00:03:58.410 } 00:03:58.410 14:44:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:58.410 14:44:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2916846 00:03:58.410 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2916846 ']' 00:03:58.410 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2916846 00:03:58.410 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:58.410 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:58.410 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2916846 00:03:58.410 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:58.410 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:58.410 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2916846' 00:03:58.410 killing process with pid 2916846 00:03:58.410 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2916846 00:03:58.410 14:44:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2916846 00:03:58.669 14:44:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2916910 00:03:58.669 14:44:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/config.json 00:03:58.669 14:44:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:03.939 14:44:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2916910 00:04:03.939 14:44:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2916910 ']' 00:04:03.939 14:44:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2916910 00:04:03.939 14:44:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:03.939 14:44:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:03.939 14:44:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2916910 00:04:03.939 14:44:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:03.939 14:44:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:03.940 14:44:56 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2916910' 00:04:03.940 killing process with pid 2916910 00:04:03.940 14:44:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2916910 00:04:03.940 14:44:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2916910 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/log.txt 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/log.txt 00:04:04.199 00:04:04.199 real 0m6.295s 00:04:04.199 user 0m5.998s 00:04:04.199 sys 0m0.608s 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.199 ************************************ 00:04:04.199 END TEST skip_rpc_with_json 00:04:04.199 ************************************ 00:04:04.199 14:44:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:04.199 14:44:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.199 14:44:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.199 14:44:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.199 ************************************ 00:04:04.199 START TEST skip_rpc_with_delay 00:04:04.199 ************************************ 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt ]] 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:04.199 [2024-12-11 14:44:57.139471] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:04.199 00:04:04.199 real 0m0.071s 00:04:04.199 user 0m0.046s 00:04:04.199 sys 0m0.024s 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:04.199 14:44:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:04.199 ************************************ 00:04:04.199 END TEST skip_rpc_with_delay 00:04:04.199 ************************************ 00:04:04.199 14:44:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:04.199 14:44:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:04.199 14:44:57 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:04.199 14:44:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:04.199 14:44:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:04.199 14:44:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.199 ************************************ 00:04:04.199 START TEST exit_on_failed_rpc_init 00:04:04.199 ************************************ 00:04:04.199 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:04.199 14:44:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2917958 00:04:04.199 14:44:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2917958 00:04:04.199 14:44:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:04:04.199 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2917958 ']' 00:04:04.199 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.199 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:04.199 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:04.199 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:04.199 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:04.459 [2024-12-11 14:44:57.281945] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:04:04.459 [2024-12-11 14:44:57.281986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917958 ] 00:04:04.459 [2024-12-11 14:44:57.359141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.459 [2024-12-11 14:44:57.400856] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.718 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:04.718 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:04.718 14:44:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.718 14:44:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x2 00:04:04.718 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:04.718 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x2 00:04:04.718 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:04.718 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.718 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:04.718 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.718 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:04.718 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:04.718 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:04.718 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt ]] 00:04:04.718 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x2 00:04:04.718 [2024-12-11 14:44:57.675107] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:04:04.718 [2024-12-11 14:44:57.675152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2918066 ] 00:04:04.718 [2024-12-11 14:44:57.752299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.977 [2024-12-11 14:44:57.792925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:04.977 [2024-12-11 14:44:57.792974] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:04.977 [2024-12-11 14:44:57.792983] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:04.977 [2024-12-11 14:44:57.792990] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:04.977 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:04.977 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:04.977 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:04.978 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:04.978 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:04.978 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:04.978 14:44:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:04.978 14:44:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2917958 00:04:04.978 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2917958 ']' 00:04:04.978 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2917958 00:04:04.978 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:04.978 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:04.978 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2917958 00:04:04.978 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:04.978 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:04.978 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2917958' 00:04:04.978 killing process with pid 2917958 00:04:04.978 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2917958 00:04:04.978 14:44:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2917958 00:04:05.236 00:04:05.236 real 0m0.959s 00:04:05.236 user 0m1.019s 00:04:05.236 sys 0m0.395s 00:04:05.236 14:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.236 14:44:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:05.236 ************************************ 00:04:05.236 END TEST exit_on_failed_rpc_init 00:04:05.236 ************************************ 00:04:05.236 14:44:58 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/config.json 00:04:05.236 00:04:05.236 real 0m13.154s 00:04:05.236 user 0m12.394s 00:04:05.236 sys 0m1.595s 00:04:05.236 14:44:58 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.236 14:44:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.236 ************************************ 00:04:05.236 END TEST skip_rpc 00:04:05.236 ************************************ 00:04:05.236 14:44:58 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_client/rpc_client.sh 00:04:05.237 14:44:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.237 14:44:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.237 14:44:58 
-- common/autotest_common.sh@10 -- # set +x 00:04:05.495 ************************************ 00:04:05.495 START TEST rpc_client 00:04:05.495 ************************************ 00:04:05.495 14:44:58 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_client/rpc_client.sh 00:04:05.495 * Looking for test storage... 00:04:05.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_client 00:04:05.495 14:44:58 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:05.495 14:44:58 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:05.495 14:44:58 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:05.495 14:44:58 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.495 14:44:58 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:05.495 14:44:58 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.495 14:44:58 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:05.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.495 --rc genhtml_branch_coverage=1 00:04:05.495 --rc genhtml_function_coverage=1 00:04:05.495 --rc genhtml_legend=1 00:04:05.496 --rc geninfo_all_blocks=1 00:04:05.496 --rc geninfo_unexecuted_blocks=1 00:04:05.496 00:04:05.496 ' 00:04:05.496 14:44:58 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:05.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.496 --rc genhtml_branch_coverage=1 00:04:05.496 --rc genhtml_function_coverage=1 00:04:05.496 --rc genhtml_legend=1 00:04:05.496 --rc geninfo_all_blocks=1 00:04:05.496 --rc geninfo_unexecuted_blocks=1 00:04:05.496 00:04:05.496 ' 00:04:05.496 14:44:58 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:05.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.496 --rc genhtml_branch_coverage=1 00:04:05.496 --rc genhtml_function_coverage=1 00:04:05.496 --rc genhtml_legend=1 00:04:05.496 --rc geninfo_all_blocks=1 00:04:05.496 --rc geninfo_unexecuted_blocks=1 00:04:05.496 00:04:05.496 ' 00:04:05.496 14:44:58 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:05.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.496 --rc genhtml_branch_coverage=1 00:04:05.496 --rc genhtml_function_coverage=1 00:04:05.496 --rc genhtml_legend=1 00:04:05.496 --rc geninfo_all_blocks=1 00:04:05.496 --rc geninfo_unexecuted_blocks=1 00:04:05.496 00:04:05.496 ' 00:04:05.496 14:44:58 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_client/rpc_client_test 00:04:05.496 OK 00:04:05.496 14:44:58 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:05.496 00:04:05.496 real 0m0.203s 00:04:05.496 user 0m0.118s 00:04:05.496 sys 0m0.098s 00:04:05.496 14:44:58 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.496 14:44:58 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:05.496 ************************************ 00:04:05.496 END TEST rpc_client 00:04:05.496 ************************************ 00:04:05.496 14:44:58 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_config.sh 
00:04:05.496 14:44:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:05.496 14:44:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:05.496 14:44:58 -- common/autotest_common.sh@10 -- # set +x 00:04:05.755 ************************************ 00:04:05.755 START TEST json_config 00:04:05.755 ************************************ 00:04:05.755 14:44:58 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_config.sh 00:04:05.755 14:44:58 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:05.755 14:44:58 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:05.755 14:44:58 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:05.755 14:44:58 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:05.755 14:44:58 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:05.755 14:44:58 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:05.755 14:44:58 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:05.755 14:44:58 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:05.755 14:44:58 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:05.755 14:44:58 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:05.755 14:44:58 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:05.755 14:44:58 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:05.755 14:44:58 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:05.755 14:44:58 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:05.755 14:44:58 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:05.755 14:44:58 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:05.755 14:44:58 json_config -- scripts/common.sh@345 -- # : 1 00:04:05.755 14:44:58 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:05.755 14:44:58 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:05.755 14:44:58 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:05.755 14:44:58 json_config -- scripts/common.sh@353 -- # local d=1 00:04:05.755 14:44:58 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:05.755 14:44:58 json_config -- scripts/common.sh@355 -- # echo 1 00:04:05.755 14:44:58 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:05.755 14:44:58 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:05.755 14:44:58 json_config -- scripts/common.sh@353 -- # local d=2 00:04:05.755 14:44:58 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:05.755 14:44:58 json_config -- scripts/common.sh@355 -- # echo 2 00:04:05.755 14:44:58 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:05.756 14:44:58 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:05.756 14:44:58 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:05.756 14:44:58 json_config -- scripts/common.sh@368 -- # return 0 00:04:05.756 14:44:58 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:05.756 14:44:58 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:05.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.756 --rc genhtml_branch_coverage=1 00:04:05.756 --rc genhtml_function_coverage=1 00:04:05.756 --rc genhtml_legend=1 00:04:05.756 --rc geninfo_all_blocks=1 00:04:05.756 --rc geninfo_unexecuted_blocks=1 00:04:05.756 00:04:05.756 ' 00:04:05.756 14:44:58 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:05.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.756 --rc genhtml_branch_coverage=1 00:04:05.756 --rc genhtml_function_coverage=1 00:04:05.756 --rc genhtml_legend=1 00:04:05.756 --rc geninfo_all_blocks=1 00:04:05.756 --rc geninfo_unexecuted_blocks=1 00:04:05.756 00:04:05.756 ' 00:04:05.756 14:44:58 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:05.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.756 --rc genhtml_branch_coverage=1 00:04:05.756 --rc genhtml_function_coverage=1 00:04:05.756 --rc genhtml_legend=1 00:04:05.756 --rc geninfo_all_blocks=1 00:04:05.756 --rc geninfo_unexecuted_blocks=1 00:04:05.756 00:04:05.756 ' 00:04:05.756 14:44:58 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:05.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:05.756 --rc genhtml_branch_coverage=1 00:04:05.756 --rc genhtml_function_coverage=1 00:04:05.756 --rc genhtml_legend=1 00:04:05.756 --rc geninfo_all_blocks=1 00:04:05.756 --rc geninfo_unexecuted_blocks=1 00:04:05.756 00:04:05.756 ' 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:05.756 14:44:58 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:04:05.756 14:44:58 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:05.756 14:44:58 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:05.756 14:44:58 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:05.756 14:44:58 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:05.756 14:44:58 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.756 14:44:58 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.756 14:44:58 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.756 14:44:58 json_config -- paths/export.sh@5 -- # export PATH 00:04:05.756 14:44:58 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@51 -- # : 0 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:05.756 14:44:58 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:05.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:05.756 14:44:58 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/common.sh 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_initiator_config.json') 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:05.756 INFO: JSON configuration test init 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:05.756 14:44:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.756 14:44:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:05.756 14:44:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:05.756 14:44:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.756 14:44:58 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:05.756 14:44:58 json_config 
-- json_config/common.sh@9 -- # local app=target 00:04:05.756 14:44:58 json_config -- json_config/common.sh@10 -- # shift 00:04:05.756 14:44:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:05.756 14:44:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:05.756 14:44:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:05.756 14:44:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:05.756 14:44:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:05.756 14:44:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2918409 00:04:05.756 14:44:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:05.756 Waiting for target to run... 00:04:05.756 14:44:58 json_config -- json_config/common.sh@25 -- # waitforlisten 2918409 /var/tmp/spdk_tgt.sock 00:04:05.756 14:44:58 json_config -- common/autotest_common.sh@835 -- # '[' -z 2918409 ']' 00:04:05.756 14:44:58 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:05.756 14:44:58 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:05.756 14:44:58 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:05.757 14:44:58 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:05.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:05.757 14:44:58 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:05.757 14:44:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.015 [2024-12-11 14:44:58.808447] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:04:06.015 [2024-12-11 14:44:58.808500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2918409 ] 00:04:06.274 [2024-12-11 14:44:59.263091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.274 [2024-12-11 14:44:59.318669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.841 14:44:59 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.841 14:44:59 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:06.841 14:44:59 json_config -- json_config/common.sh@26 -- # echo '' 00:04:06.841 00:04:06.841 14:44:59 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:06.841 14:44:59 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:06.841 14:44:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.841 14:44:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.841 14:44:59 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:06.841 14:44:59 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:06.841 14:44:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:06.841 14:44:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:06.841 14:44:59 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:06.841 14:44:59 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:06.841 14:44:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:10.131 14:45:02 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:10.131 14:45:02 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:10.131 14:45:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.131 14:45:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.131 14:45:02 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:10.131 14:45:02 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:10.131 14:45:02 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:10.131 14:45:02 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:10.131 14:45:02 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:10.132 14:45:02 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:10.132 14:45:02 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:10.132 14:45:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:10.132 14:45:02 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:10.132 14:45:02 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:10.132 14:45:02 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:10.132 14:45:02 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:10.132 14:45:02 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:10.132 14:45:02 json_config -- json_config/json_config.sh@54 -- # sort 00:04:10.132 14:45:02 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:10.132 14:45:03 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:10.132 14:45:03 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:10.132 14:45:03 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:10.132 14:45:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:10.132 14:45:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.132 14:45:03 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:10.132 14:45:03 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:10.132 14:45:03 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:10.132 14:45:03 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:10.132 14:45:03 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:10.132 14:45:03 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:10.132 14:45:03 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:10.132 14:45:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:10.132 14:45:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.132 14:45:03 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:10.132 14:45:03 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:10.132 14:45:03 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:10.132 14:45:03 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:10.132 14:45:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:10.391 MallocForNvmf0 00:04:10.391 14:45:03 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:10.391 14:45:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:10.391 MallocForNvmf1 00:04:10.650 14:45:03 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:10.650 14:45:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:10.650 [2024-12-11 14:45:03.613422] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:10.650 14:45:03 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:10.650 14:45:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:10.909 14:45:03 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:10.909 14:45:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:11.168 14:45:04 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:11.168 14:45:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:11.428 14:45:04 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:11.428 14:45:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:11.428 [2024-12-11 14:45:04.415953] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:11.428 14:45:04 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:11.428 14:45:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:11.428 14:45:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.687 14:45:04 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:11.687 14:45:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:11.687 14:45:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.687 14:45:04 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:11.687 14:45:04 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:11.687 14:45:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:11.687 MallocBdevForConfigChangeCheck 00:04:11.687 14:45:04 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:11.687 14:45:04 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:11.687 14:45:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:11.946 14:45:04 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:11.946 14:45:04 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:12.208 14:45:05 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:12.208 INFO: shutting down applications... 
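Note: the entries above build the NVMe-oF/TCP target entirely through scripts/rpc.py. A minimal manual replay of that same sequence might look like the sketch below; it assumes a target is already listening on /var/tmp/spdk_tgt.sock and that the working directory is the spdk checkout, and every command and argument is taken verbatim from the trace above.
# sketch only: replay of the RPC sequence recorded above
rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420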
00:04:12.208 14:45:05 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:12.208 14:45:05 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:12.209 14:45:05 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:12.209 14:45:05 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:14.116 Calling clear_iscsi_subsystem 00:04:14.116 Calling clear_nvmf_subsystem 00:04:14.116 Calling clear_nbd_subsystem 00:04:14.116 Calling clear_ublk_subsystem 00:04:14.116 Calling clear_vhost_blk_subsystem 00:04:14.116 Calling clear_vhost_scsi_subsystem 00:04:14.116 Calling clear_bdev_subsystem 00:04:14.116 14:45:06 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py 00:04:14.116 14:45:06 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:14.116 14:45:06 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:14.116 14:45:06 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:14.116 14:45:06 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:14.116 14:45:06 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method check_empty 00:04:14.116 14:45:07 json_config -- json_config/json_config.sh@352 -- # break 00:04:14.116 14:45:07 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:14.116 14:45:07 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:14.116 14:45:07 json_config -- json_config/common.sh@31 -- # local app=target 00:04:14.116 14:45:07 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:14.116 14:45:07 json_config -- json_config/common.sh@35 -- # [[ -n 2918409 ]] 00:04:14.116 14:45:07 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2918409 00:04:14.116 14:45:07 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:14.116 14:45:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.116 14:45:07 json_config -- json_config/common.sh@41 -- # kill -0 2918409 00:04:14.116 14:45:07 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:14.685 14:45:07 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:14.685 14:45:07 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.685 14:45:07 json_config -- json_config/common.sh@41 -- # kill -0 2918409 00:04:14.685 14:45:07 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:14.685 14:45:07 json_config -- json_config/common.sh@43 -- # break 00:04:14.685 14:45:07 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:14.685 14:45:07 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:14.685 SPDK target shutdown done 00:04:14.685 14:45:07 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:14.685 INFO: relaunching applications... 
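Note: the shutdown traced above follows the signal-and-poll pattern from json_config/common.sh: send SIGINT to the target, then probe the pid with kill -0 in half-second steps (up to 30 tries) before declaring it gone. A rough sketch of that loop, assuming the target pid is in $pid:
# sketch: graceful-shutdown wait, shape taken from the trace above
kill -SIGINT "$pid"
for ((i = 0; i < 30; i++)); do
    kill -0 "$pid" 2>/dev/null || break   # process gone: shutdown finished
    sleep 0.5
done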
00:04:14.685 14:45:07 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:04:14.685 14:45:07 json_config -- json_config/common.sh@9 -- # local app=target 00:04:14.685 14:45:07 json_config -- json_config/common.sh@10 -- # shift 00:04:14.685 14:45:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:14.685 14:45:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:14.685 14:45:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:14.685 14:45:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.685 14:45:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.685 14:45:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2920065 00:04:14.685 14:45:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:14.685 Waiting for target to run... 00:04:14.685 14:45:07 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:04:14.685 14:45:07 json_config -- json_config/common.sh@25 -- # waitforlisten 2920065 /var/tmp/spdk_tgt.sock 00:04:14.685 14:45:07 json_config -- common/autotest_common.sh@835 -- # '[' -z 2920065 ']' 00:04:14.685 14:45:07 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:14.685 14:45:07 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.685 14:45:07 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:14.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:14.685 14:45:07 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.685 14:45:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.685 [2024-12-11 14:45:07.662892] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:04:14.685 [2024-12-11 14:45:07.662951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2920065 ] 00:04:14.945 [2024-12-11 14:45:07.952996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.945 [2024-12-11 14:45:07.986550] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.236 [2024-12-11 14:45:11.024138] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:18.236 [2024-12-11 14:45:11.056534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:18.236 14:45:11 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:18.236 14:45:11 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:18.236 14:45:11 json_config -- json_config/common.sh@26 -- # echo '' 00:04:18.236 00:04:18.236 14:45:11 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:18.236 14:45:11 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:18.236 INFO: Checking if target configuration is the same... 
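Note: the "is the same" check announced above is performed by test/json_config/json_diff.sh in the entries that follow: the live configuration is dumped with save_config, both copies are normalized with config_filter.py -method sort, and the results are diffed. A hand-rolled sketch of the same idea, assuming config_filter.py filters stdin to stdout as the trace suggests (the temporary file names here are illustrative):
# sketch: compare the running target's config against the on-disk JSON
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | ./test/json_config/config_filter.py -method sort > /tmp/live_config.json
./test/json_config/config_filter.py -method sort \
    < ./spdk_tgt_config.json > /tmp/file_config.json
diff -u /tmp/live_config.json /tmp/file_config.json && echo 'INFO: JSON config files are the same'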
00:04:18.236 14:45:11 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:04:18.236 14:45:11 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:18.236 14:45:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.236 + '[' 2 -ne 2 ']' 00:04:18.236 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_diff.sh 00:04:18.236 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/../.. 00:04:18.236 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:04:18.236 +++ basename /dev/fd/62 00:04:18.236 ++ mktemp /tmp/62.XXX 00:04:18.236 + tmp_file_1=/tmp/62.G2K 00:04:18.236 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:04:18.236 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:18.236 + tmp_file_2=/tmp/spdk_tgt_config.json.ekc 00:04:18.236 + ret=0 00:04:18.236 + /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method sort 00:04:18.496 + /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method sort 00:04:18.496 + diff -u /tmp/62.G2K /tmp/spdk_tgt_config.json.ekc 00:04:18.496 + echo 'INFO: JSON config files are the same' 00:04:18.496 INFO: JSON config files are the same 00:04:18.496 + rm /tmp/62.G2K /tmp/spdk_tgt_config.json.ekc 00:04:18.496 + exit 0 00:04:18.496 14:45:11 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:18.496 14:45:11 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:18.496 INFO: changing configuration and checking if this can be detected... 00:04:18.496 14:45:11 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:18.496 14:45:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:18.755 14:45:11 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:04:18.755 14:45:11 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:18.755 14:45:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:18.755 + '[' 2 -ne 2 ']' 00:04:18.755 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_diff.sh 00:04:18.755 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/../.. 
00:04:18.755 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:04:18.755 +++ basename /dev/fd/62 00:04:18.755 ++ mktemp /tmp/62.XXX 00:04:18.755 + tmp_file_1=/tmp/62.Br4 00:04:18.755 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:04:18.755 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:18.755 + tmp_file_2=/tmp/spdk_tgt_config.json.gZs 00:04:18.755 + ret=0 00:04:18.755 + /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method sort 00:04:19.016 + /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method sort 00:04:19.276 + diff -u /tmp/62.Br4 /tmp/spdk_tgt_config.json.gZs 00:04:19.276 + ret=1 00:04:19.276 + echo '=== Start of file: /tmp/62.Br4 ===' 00:04:19.276 + cat /tmp/62.Br4 00:04:19.276 + echo '=== End of file: /tmp/62.Br4 ===' 00:04:19.276 + echo '' 00:04:19.276 + echo '=== Start of file: /tmp/spdk_tgt_config.json.gZs ===' 00:04:19.276 + cat /tmp/spdk_tgt_config.json.gZs 00:04:19.276 + echo '=== End of file: /tmp/spdk_tgt_config.json.gZs ===' 00:04:19.276 + echo '' 00:04:19.276 + rm /tmp/62.Br4 /tmp/spdk_tgt_config.json.gZs 00:04:19.276 + exit 1 00:04:19.276 14:45:12 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:19.276 INFO: configuration change detected. 00:04:19.276 14:45:12 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:19.276 14:45:12 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:19.276 14:45:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.276 14:45:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.276 14:45:12 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:19.276 14:45:12 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:19.276 14:45:12 json_config -- json_config/json_config.sh@324 -- # [[ -n 2920065 ]] 00:04:19.276 14:45:12 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:19.276 14:45:12 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:19.276 14:45:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.276 14:45:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.276 14:45:12 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:19.276 14:45:12 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:19.276 14:45:12 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:19.276 14:45:12 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:19.276 14:45:12 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:19.276 14:45:12 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:19.276 14:45:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:19.276 14:45:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.276 14:45:12 json_config -- json_config/json_config.sh@330 -- # killprocess 2920065 00:04:19.277 14:45:12 json_config -- common/autotest_common.sh@954 -- # '[' -z 2920065 ']' 00:04:19.277 14:45:12 json_config -- common/autotest_common.sh@958 -- # kill -0 2920065 00:04:19.277 14:45:12 json_config -- common/autotest_common.sh@959 -- # uname 00:04:19.277 14:45:12 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:19.277 
14:45:12 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2920065 00:04:19.277 14:45:12 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:19.277 14:45:12 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:19.277 14:45:12 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2920065' 00:04:19.277 killing process with pid 2920065 00:04:19.277 14:45:12 json_config -- common/autotest_common.sh@973 -- # kill 2920065 00:04:19.277 14:45:12 json_config -- common/autotest_common.sh@978 -- # wait 2920065 00:04:21.219 14:45:13 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:04:21.219 14:45:13 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:21.219 14:45:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.219 14:45:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.219 14:45:13 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:21.219 14:45:13 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:21.219 INFO: Success 00:04:21.219 00:04:21.219 real 0m15.196s 00:04:21.219 user 0m15.714s 00:04:21.219 sys 0m2.559s 00:04:21.219 14:45:13 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:21.219 14:45:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:21.219 ************************************ 00:04:21.219 END TEST json_config 00:04:21.219 ************************************ 00:04:21.219 14:45:13 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_config_extra_key.sh 00:04:21.219 14:45:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:21.219 14:45:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:21.219 14:45:13 -- common/autotest_common.sh@10 -- # set +x 00:04:21.219 ************************************ 00:04:21.219 START TEST json_config_extra_key 00:04:21.219 ************************************ 00:04:21.219 14:45:13 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_config_extra_key.sh 00:04:21.219 14:45:13 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:21.219 14:45:13 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:21.219 14:45:13 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:21.219 14:45:13 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.219 14:45:13 
json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.219 14:45:13 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.220 14:45:13 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.220 14:45:13 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:21.220 14:45:13 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.220 14:45:13 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:21.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.220 --rc genhtml_branch_coverage=1 00:04:21.220 --rc genhtml_function_coverage=1 00:04:21.220 --rc genhtml_legend=1 00:04:21.220 --rc geninfo_all_blocks=1 00:04:21.220 --rc geninfo_unexecuted_blocks=1 00:04:21.220 00:04:21.220 ' 00:04:21.220 14:45:13 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:21.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.220 --rc genhtml_branch_coverage=1 00:04:21.220 --rc genhtml_function_coverage=1 00:04:21.220 --rc genhtml_legend=1 00:04:21.220 --rc geninfo_all_blocks=1 00:04:21.220 --rc geninfo_unexecuted_blocks=1 00:04:21.220 00:04:21.220 ' 00:04:21.220 14:45:13 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:21.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.220 --rc genhtml_branch_coverage=1 00:04:21.220 --rc genhtml_function_coverage=1 00:04:21.220 --rc genhtml_legend=1 00:04:21.220 --rc geninfo_all_blocks=1 00:04:21.220 --rc geninfo_unexecuted_blocks=1 00:04:21.220 00:04:21.220 ' 00:04:21.220 14:45:13 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:21.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.220 --rc genhtml_branch_coverage=1 00:04:21.220 --rc genhtml_function_coverage=1 00:04:21.220 --rc genhtml_legend=1 00:04:21.220 --rc geninfo_all_blocks=1 00:04:21.220 --rc geninfo_unexecuted_blocks=1 00:04:21.220 00:04:21.220 ' 00:04:21.220 14:45:13 
json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:21.220 14:45:13 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:04:21.220 14:45:13 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:21.220 14:45:13 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:21.220 14:45:13 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:21.220 14:45:13 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:21.220 14:45:13 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.220 14:45:13 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.220 14:45:13 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.220 14:45:14 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:21.220 14:45:14 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:21.220 14:45:14 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:21.220 14:45:14 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:21.220 14:45:14 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:21.220 14:45:14 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:21.220 14:45:14 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:21.220 14:45:14 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:21.220 14:45:14 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:21.220 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:21.220 14:45:14 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:21.220 14:45:14 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:21.220 14:45:14 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:21.220 14:45:14 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/common.sh 00:04:21.220 14:45:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:21.220 14:45:14 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:21.220 14:45:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:21.220 14:45:14 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:21.220 14:45:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:21.220 14:45:14 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:21.220 14:45:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/extra_key.json') 00:04:21.220 14:45:14 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:21.220 14:45:14 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:21.220 14:45:14 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:21.220 INFO: launching applications... 
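Note: the "launching applications" step that follows starts spdk_tgt with the extra_key.json configuration and then waits for its RPC socket to answer. A minimal sketch of that wait, built only from calls visible in this log (spdk_get_version is listed by rpc_get_methods further down; the retry interval is illustrative and the real waitforlisten helper does more checking):
# sketch: launch the target with a JSON config and poll its RPC socket
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json ./test/json_config/extra_key.json &
pid=$!
until ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done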
00:04:21.220 14:45:14 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/extra_key.json 00:04:21.220 14:45:14 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:21.220 14:45:14 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:21.220 14:45:14 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:21.220 14:45:14 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:21.220 14:45:14 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:21.220 14:45:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.220 14:45:14 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:21.220 14:45:14 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2921721 00:04:21.220 14:45:14 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:21.220 Waiting for target to run... 00:04:21.220 14:45:14 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2921721 /var/tmp/spdk_tgt.sock 00:04:21.220 14:45:14 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2921721 ']' 00:04:21.220 14:45:14 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/extra_key.json 00:04:21.220 14:45:14 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:21.220 14:45:14 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:21.220 14:45:14 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:21.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:21.220 14:45:14 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:21.220 14:45:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:21.220 [2024-12-11 14:45:14.062190] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:04:21.221 [2024-12-11 14:45:14.062241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2921721 ] 00:04:21.525 [2024-12-11 14:45:14.347991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.525 [2024-12-11 14:45:14.381058] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.099 14:45:14 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.099 14:45:14 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:22.099 14:45:14 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:22.099 00:04:22.099 14:45:14 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:22.099 INFO: shutting down applications... 
00:04:22.099 14:45:14 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:22.099 14:45:14 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:22.099 14:45:14 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:22.099 14:45:14 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2921721 ]] 00:04:22.099 14:45:14 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2921721 00:04:22.099 14:45:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:22.099 14:45:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:22.099 14:45:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2921721 00:04:22.099 14:45:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:22.358 14:45:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:22.358 14:45:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:22.358 14:45:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2921721 00:04:22.358 14:45:15 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:22.358 14:45:15 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:22.358 14:45:15 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:22.358 14:45:15 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:22.358 SPDK target shutdown done 00:04:22.358 14:45:15 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:22.358 Success 00:04:22.358 00:04:22.358 real 0m1.577s 00:04:22.358 user 0m1.370s 00:04:22.358 sys 0m0.397s 00:04:22.358 14:45:15 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.358 14:45:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:22.358 ************************************ 00:04:22.358 END TEST json_config_extra_key 00:04:22.358 ************************************ 00:04:22.617 14:45:15 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:22.617 14:45:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.617 14:45:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.617 14:45:15 -- common/autotest_common.sh@10 -- # set +x 00:04:22.617 ************************************ 00:04:22.617 START TEST alias_rpc 00:04:22.617 ************************************ 00:04:22.617 14:45:15 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:22.617 * Looking for test storage... 
00:04:22.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/alias_rpc 00:04:22.617 14:45:15 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:22.617 14:45:15 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:22.617 14:45:15 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:22.617 14:45:15 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:22.617 14:45:15 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:22.618 14:45:15 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:22.618 14:45:15 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:22.618 14:45:15 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:22.618 14:45:15 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:22.618 14:45:15 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:22.618 14:45:15 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:22.618 14:45:15 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:22.618 14:45:15 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:22.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.618 --rc genhtml_branch_coverage=1 00:04:22.618 --rc genhtml_function_coverage=1 00:04:22.618 --rc genhtml_legend=1 00:04:22.618 --rc geninfo_all_blocks=1 00:04:22.618 --rc geninfo_unexecuted_blocks=1 00:04:22.618 00:04:22.618 ' 00:04:22.618 14:45:15 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:22.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.618 --rc genhtml_branch_coverage=1 00:04:22.618 --rc genhtml_function_coverage=1 00:04:22.618 --rc genhtml_legend=1 00:04:22.618 --rc geninfo_all_blocks=1 00:04:22.618 --rc geninfo_unexecuted_blocks=1 00:04:22.618 00:04:22.618 ' 00:04:22.618 14:45:15 
alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:22.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.618 --rc genhtml_branch_coverage=1 00:04:22.618 --rc genhtml_function_coverage=1 00:04:22.618 --rc genhtml_legend=1 00:04:22.618 --rc geninfo_all_blocks=1 00:04:22.618 --rc geninfo_unexecuted_blocks=1 00:04:22.618 00:04:22.618 ' 00:04:22.618 14:45:15 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:22.618 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:22.618 --rc genhtml_branch_coverage=1 00:04:22.618 --rc genhtml_function_coverage=1 00:04:22.618 --rc genhtml_legend=1 00:04:22.618 --rc geninfo_all_blocks=1 00:04:22.618 --rc geninfo_unexecuted_blocks=1 00:04:22.618 00:04:22.618 ' 00:04:22.618 14:45:15 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:22.618 14:45:15 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2922022 00:04:22.618 14:45:15 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2922022 00:04:22.618 14:45:15 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:22.618 14:45:15 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2922022 ']' 00:04:22.618 14:45:15 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.618 14:45:15 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.618 14:45:15 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.618 14:45:15 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.618 14:45:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.877 [2024-12-11 14:45:15.696188] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:04:22.877 [2024-12-11 14:45:15.696238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922022 ] 00:04:22.878 [2024-12-11 14:45:15.768991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.878 [2024-12-11 14:45:15.807901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.137 14:45:16 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.137 14:45:16 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:23.137 14:45:16 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py load_config -i 00:04:23.396 14:45:16 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2922022 00:04:23.396 14:45:16 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2922022 ']' 00:04:23.396 14:45:16 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2922022 00:04:23.396 14:45:16 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:23.396 14:45:16 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.396 14:45:16 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2922022 00:04:23.396 14:45:16 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.396 14:45:16 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.396 14:45:16 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2922022' 00:04:23.396 killing process with pid 2922022 00:04:23.396 14:45:16 alias_rpc -- common/autotest_common.sh@973 -- # kill 2922022 00:04:23.396 14:45:16 alias_rpc -- common/autotest_common.sh@978 -- # wait 2922022 00:04:23.656 00:04:23.656 real 0m1.136s 00:04:23.656 user 0m1.161s 00:04:23.656 sys 0m0.411s 00:04:23.656 14:45:16 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.656 14:45:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.656 ************************************ 00:04:23.656 END TEST alias_rpc 00:04:23.656 ************************************ 00:04:23.656 14:45:16 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:23.656 14:45:16 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/tcp.sh 00:04:23.656 14:45:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.656 14:45:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.656 14:45:16 -- common/autotest_common.sh@10 -- # set +x 00:04:23.656 ************************************ 00:04:23.656 START TEST spdkcli_tcp 00:04:23.656 ************************************ 00:04:23.656 14:45:16 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/tcp.sh 00:04:23.917 * Looking for test storage... 
00:04:23.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli 00:04:23.917 14:45:16 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:23.917 14:45:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:23.917 14:45:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:23.917 14:45:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.917 14:45:16 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:23.917 14:45:16 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.917 14:45:16 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:23.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.917 --rc genhtml_branch_coverage=1 00:04:23.917 --rc genhtml_function_coverage=1 00:04:23.917 --rc genhtml_legend=1 00:04:23.917 --rc geninfo_all_blocks=1 00:04:23.917 --rc geninfo_unexecuted_blocks=1 00:04:23.917 00:04:23.917 ' 00:04:23.917 14:45:16 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:23.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.917 --rc genhtml_branch_coverage=1 00:04:23.917 --rc genhtml_function_coverage=1 00:04:23.917 --rc genhtml_legend=1 00:04:23.917 --rc geninfo_all_blocks=1 00:04:23.917 --rc 
geninfo_unexecuted_blocks=1 00:04:23.917 00:04:23.917 ' 00:04:23.917 14:45:16 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:23.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.917 --rc genhtml_branch_coverage=1 00:04:23.917 --rc genhtml_function_coverage=1 00:04:23.917 --rc genhtml_legend=1 00:04:23.917 --rc geninfo_all_blocks=1 00:04:23.917 --rc geninfo_unexecuted_blocks=1 00:04:23.917 00:04:23.917 ' 00:04:23.917 14:45:16 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:23.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.917 --rc genhtml_branch_coverage=1 00:04:23.917 --rc genhtml_function_coverage=1 00:04:23.917 --rc genhtml_legend=1 00:04:23.917 --rc geninfo_all_blocks=1 00:04:23.917 --rc geninfo_unexecuted_blocks=1 00:04:23.917 00:04:23.917 ' 00:04:23.917 14:45:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/common.sh 00:04:23.917 14:45:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_job.py 00:04:23.917 14:45:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/clear_config.py 00:04:23.917 14:45:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:23.917 14:45:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:23.917 14:45:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:23.917 14:45:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:23.917 14:45:16 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.917 14:45:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.917 14:45:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2922311 00:04:23.917 14:45:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2922311 00:04:23.917 14:45:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:23.917 14:45:16 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2922311 ']' 00:04:23.917 14:45:16 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.917 14:45:16 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.917 14:45:16 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.917 14:45:16 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.917 14:45:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:23.917 [2024-12-11 14:45:16.914126] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:04:23.917 [2024-12-11 14:45:16.914181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922311 ] 00:04:24.177 [2024-12-11 14:45:16.990586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.177 [2024-12-11 14:45:17.033080] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.177 [2024-12-11 14:45:17.033083] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.437 14:45:17 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.437 14:45:17 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:24.437 14:45:17 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2922318 00:04:24.437 14:45:17 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:24.437 14:45:17 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:24.437 [ 00:04:24.437 "bdev_malloc_delete", 00:04:24.437 "bdev_malloc_create", 00:04:24.437 "bdev_null_resize", 00:04:24.437 "bdev_null_delete", 00:04:24.437 "bdev_null_create", 00:04:24.437 "bdev_nvme_cuse_unregister", 00:04:24.437 "bdev_nvme_cuse_register", 00:04:24.437 "bdev_opal_new_user", 00:04:24.437 "bdev_opal_set_lock_state", 00:04:24.437 "bdev_opal_delete", 00:04:24.437 "bdev_opal_get_info", 00:04:24.437 "bdev_opal_create", 00:04:24.437 "bdev_nvme_opal_revert", 00:04:24.437 "bdev_nvme_opal_init", 00:04:24.437 "bdev_nvme_send_cmd", 00:04:24.437 "bdev_nvme_set_keys", 00:04:24.437 "bdev_nvme_get_path_iostat", 00:04:24.437 "bdev_nvme_get_mdns_discovery_info", 00:04:24.437 "bdev_nvme_stop_mdns_discovery", 00:04:24.437 "bdev_nvme_start_mdns_discovery", 00:04:24.437 "bdev_nvme_set_multipath_policy", 00:04:24.437 "bdev_nvme_set_preferred_path", 00:04:24.437 "bdev_nvme_get_io_paths", 00:04:24.437 "bdev_nvme_remove_error_injection", 00:04:24.437 "bdev_nvme_add_error_injection", 00:04:24.437 "bdev_nvme_get_discovery_info", 00:04:24.437 "bdev_nvme_stop_discovery", 00:04:24.437 "bdev_nvme_start_discovery", 00:04:24.437 "bdev_nvme_get_controller_health_info", 00:04:24.437 "bdev_nvme_disable_controller", 00:04:24.437 "bdev_nvme_enable_controller", 00:04:24.437 "bdev_nvme_reset_controller", 00:04:24.437 "bdev_nvme_get_transport_statistics", 00:04:24.437 "bdev_nvme_apply_firmware", 00:04:24.437 "bdev_nvme_detach_controller", 00:04:24.437 "bdev_nvme_get_controllers", 00:04:24.437 "bdev_nvme_attach_controller", 00:04:24.437 "bdev_nvme_set_hotplug", 00:04:24.437 "bdev_nvme_set_options", 00:04:24.437 "bdev_passthru_delete", 00:04:24.437 "bdev_passthru_create", 00:04:24.437 "bdev_lvol_set_parent_bdev", 00:04:24.437 "bdev_lvol_set_parent", 00:04:24.437 "bdev_lvol_check_shallow_copy", 00:04:24.437 "bdev_lvol_start_shallow_copy", 00:04:24.437 "bdev_lvol_grow_lvstore", 00:04:24.437 "bdev_lvol_get_lvols", 00:04:24.437 "bdev_lvol_get_lvstores", 00:04:24.437 "bdev_lvol_delete", 00:04:24.437 "bdev_lvol_set_read_only", 00:04:24.437 "bdev_lvol_resize", 00:04:24.437 "bdev_lvol_decouple_parent", 00:04:24.437 "bdev_lvol_inflate", 00:04:24.437 "bdev_lvol_rename", 00:04:24.437 "bdev_lvol_clone_bdev", 00:04:24.437 "bdev_lvol_clone", 00:04:24.437 "bdev_lvol_snapshot", 00:04:24.437 "bdev_lvol_create", 00:04:24.437 "bdev_lvol_delete_lvstore", 00:04:24.437 "bdev_lvol_rename_lvstore", 
00:04:24.437 "bdev_lvol_create_lvstore", 00:04:24.437 "bdev_raid_set_options", 00:04:24.437 "bdev_raid_remove_base_bdev", 00:04:24.437 "bdev_raid_add_base_bdev", 00:04:24.437 "bdev_raid_delete", 00:04:24.437 "bdev_raid_create", 00:04:24.437 "bdev_raid_get_bdevs", 00:04:24.437 "bdev_error_inject_error", 00:04:24.437 "bdev_error_delete", 00:04:24.437 "bdev_error_create", 00:04:24.437 "bdev_split_delete", 00:04:24.437 "bdev_split_create", 00:04:24.437 "bdev_delay_delete", 00:04:24.437 "bdev_delay_create", 00:04:24.437 "bdev_delay_update_latency", 00:04:24.437 "bdev_zone_block_delete", 00:04:24.437 "bdev_zone_block_create", 00:04:24.437 "blobfs_create", 00:04:24.437 "blobfs_detect", 00:04:24.437 "blobfs_set_cache_size", 00:04:24.437 "bdev_aio_delete", 00:04:24.437 "bdev_aio_rescan", 00:04:24.437 "bdev_aio_create", 00:04:24.437 "bdev_ftl_set_property", 00:04:24.437 "bdev_ftl_get_properties", 00:04:24.437 "bdev_ftl_get_stats", 00:04:24.437 "bdev_ftl_unmap", 00:04:24.437 "bdev_ftl_unload", 00:04:24.437 "bdev_ftl_delete", 00:04:24.437 "bdev_ftl_load", 00:04:24.437 "bdev_ftl_create", 00:04:24.437 "bdev_virtio_attach_controller", 00:04:24.437 "bdev_virtio_scsi_get_devices", 00:04:24.437 "bdev_virtio_detach_controller", 00:04:24.437 "bdev_virtio_blk_set_hotplug", 00:04:24.437 "bdev_iscsi_delete", 00:04:24.437 "bdev_iscsi_create", 00:04:24.437 "bdev_iscsi_set_options", 00:04:24.437 "accel_error_inject_error", 00:04:24.437 "ioat_scan_accel_module", 00:04:24.437 "dsa_scan_accel_module", 00:04:24.437 "iaa_scan_accel_module", 00:04:24.437 "vfu_virtio_create_fs_endpoint", 00:04:24.437 "vfu_virtio_create_scsi_endpoint", 00:04:24.437 "vfu_virtio_scsi_remove_target", 00:04:24.437 "vfu_virtio_scsi_add_target", 00:04:24.437 "vfu_virtio_create_blk_endpoint", 00:04:24.437 "vfu_virtio_delete_endpoint", 00:04:24.437 "keyring_file_remove_key", 00:04:24.437 "keyring_file_add_key", 00:04:24.437 "keyring_linux_set_options", 00:04:24.437 "fsdev_aio_delete", 00:04:24.437 "fsdev_aio_create", 00:04:24.437 "iscsi_get_histogram", 00:04:24.437 "iscsi_enable_histogram", 00:04:24.437 "iscsi_set_options", 00:04:24.437 "iscsi_get_auth_groups", 00:04:24.437 "iscsi_auth_group_remove_secret", 00:04:24.437 "iscsi_auth_group_add_secret", 00:04:24.437 "iscsi_delete_auth_group", 00:04:24.437 "iscsi_create_auth_group", 00:04:24.437 "iscsi_set_discovery_auth", 00:04:24.437 "iscsi_get_options", 00:04:24.437 "iscsi_target_node_request_logout", 00:04:24.437 "iscsi_target_node_set_redirect", 00:04:24.437 "iscsi_target_node_set_auth", 00:04:24.437 "iscsi_target_node_add_lun", 00:04:24.437 "iscsi_get_stats", 00:04:24.437 "iscsi_get_connections", 00:04:24.437 "iscsi_portal_group_set_auth", 00:04:24.437 "iscsi_start_portal_group", 00:04:24.437 "iscsi_delete_portal_group", 00:04:24.437 "iscsi_create_portal_group", 00:04:24.437 "iscsi_get_portal_groups", 00:04:24.437 "iscsi_delete_target_node", 00:04:24.437 "iscsi_target_node_remove_pg_ig_maps", 00:04:24.437 "iscsi_target_node_add_pg_ig_maps", 00:04:24.437 "iscsi_create_target_node", 00:04:24.437 "iscsi_get_target_nodes", 00:04:24.437 "iscsi_delete_initiator_group", 00:04:24.437 "iscsi_initiator_group_remove_initiators", 00:04:24.437 "iscsi_initiator_group_add_initiators", 00:04:24.437 "iscsi_create_initiator_group", 00:04:24.437 "iscsi_get_initiator_groups", 00:04:24.437 "nvmf_set_crdt", 00:04:24.437 "nvmf_set_config", 00:04:24.437 "nvmf_set_max_subsystems", 00:04:24.437 "nvmf_stop_mdns_prr", 00:04:24.437 "nvmf_publish_mdns_prr", 00:04:24.437 "nvmf_subsystem_get_listeners", 00:04:24.437 
"nvmf_subsystem_get_qpairs", 00:04:24.437 "nvmf_subsystem_get_controllers", 00:04:24.437 "nvmf_get_stats", 00:04:24.437 "nvmf_get_transports", 00:04:24.437 "nvmf_create_transport", 00:04:24.437 "nvmf_get_targets", 00:04:24.437 "nvmf_delete_target", 00:04:24.437 "nvmf_create_target", 00:04:24.438 "nvmf_subsystem_allow_any_host", 00:04:24.438 "nvmf_subsystem_set_keys", 00:04:24.438 "nvmf_subsystem_remove_host", 00:04:24.438 "nvmf_subsystem_add_host", 00:04:24.438 "nvmf_ns_remove_host", 00:04:24.438 "nvmf_ns_add_host", 00:04:24.438 "nvmf_subsystem_remove_ns", 00:04:24.438 "nvmf_subsystem_set_ns_ana_group", 00:04:24.438 "nvmf_subsystem_add_ns", 00:04:24.438 "nvmf_subsystem_listener_set_ana_state", 00:04:24.438 "nvmf_discovery_get_referrals", 00:04:24.438 "nvmf_discovery_remove_referral", 00:04:24.438 "nvmf_discovery_add_referral", 00:04:24.438 "nvmf_subsystem_remove_listener", 00:04:24.438 "nvmf_subsystem_add_listener", 00:04:24.438 "nvmf_delete_subsystem", 00:04:24.438 "nvmf_create_subsystem", 00:04:24.438 "nvmf_get_subsystems", 00:04:24.438 "env_dpdk_get_mem_stats", 00:04:24.438 "nbd_get_disks", 00:04:24.438 "nbd_stop_disk", 00:04:24.438 "nbd_start_disk", 00:04:24.438 "ublk_recover_disk", 00:04:24.438 "ublk_get_disks", 00:04:24.438 "ublk_stop_disk", 00:04:24.438 "ublk_start_disk", 00:04:24.438 "ublk_destroy_target", 00:04:24.438 "ublk_create_target", 00:04:24.438 "virtio_blk_create_transport", 00:04:24.438 "virtio_blk_get_transports", 00:04:24.438 "vhost_controller_set_coalescing", 00:04:24.438 "vhost_get_controllers", 00:04:24.438 "vhost_delete_controller", 00:04:24.438 "vhost_create_blk_controller", 00:04:24.438 "vhost_scsi_controller_remove_target", 00:04:24.438 "vhost_scsi_controller_add_target", 00:04:24.438 "vhost_start_scsi_controller", 00:04:24.438 "vhost_create_scsi_controller", 00:04:24.438 "thread_set_cpumask", 00:04:24.438 "scheduler_set_options", 00:04:24.438 "framework_get_governor", 00:04:24.438 "framework_get_scheduler", 00:04:24.438 "framework_set_scheduler", 00:04:24.438 "framework_get_reactors", 00:04:24.438 "thread_get_io_channels", 00:04:24.438 "thread_get_pollers", 00:04:24.438 "thread_get_stats", 00:04:24.438 "framework_monitor_context_switch", 00:04:24.438 "spdk_kill_instance", 00:04:24.438 "log_enable_timestamps", 00:04:24.438 "log_get_flags", 00:04:24.438 "log_clear_flag", 00:04:24.438 "log_set_flag", 00:04:24.438 "log_get_level", 00:04:24.438 "log_set_level", 00:04:24.438 "log_get_print_level", 00:04:24.438 "log_set_print_level", 00:04:24.438 "framework_enable_cpumask_locks", 00:04:24.438 "framework_disable_cpumask_locks", 00:04:24.438 "framework_wait_init", 00:04:24.438 "framework_start_init", 00:04:24.438 "scsi_get_devices", 00:04:24.438 "bdev_get_histogram", 00:04:24.438 "bdev_enable_histogram", 00:04:24.438 "bdev_set_qos_limit", 00:04:24.438 "bdev_set_qd_sampling_period", 00:04:24.438 "bdev_get_bdevs", 00:04:24.438 "bdev_reset_iostat", 00:04:24.438 "bdev_get_iostat", 00:04:24.438 "bdev_examine", 00:04:24.438 "bdev_wait_for_examine", 00:04:24.438 "bdev_set_options", 00:04:24.438 "accel_get_stats", 00:04:24.438 "accel_set_options", 00:04:24.438 "accel_set_driver", 00:04:24.438 "accel_crypto_key_destroy", 00:04:24.438 "accel_crypto_keys_get", 00:04:24.438 "accel_crypto_key_create", 00:04:24.438 "accel_assign_opc", 00:04:24.438 "accel_get_module_info", 00:04:24.438 "accel_get_opc_assignments", 00:04:24.438 "vmd_rescan", 00:04:24.438 "vmd_remove_device", 00:04:24.438 "vmd_enable", 00:04:24.438 "sock_get_default_impl", 00:04:24.438 "sock_set_default_impl", 
00:04:24.438 "sock_impl_set_options", 00:04:24.438 "sock_impl_get_options", 00:04:24.438 "iobuf_get_stats", 00:04:24.438 "iobuf_set_options", 00:04:24.438 "keyring_get_keys", 00:04:24.438 "vfu_tgt_set_base_path", 00:04:24.438 "framework_get_pci_devices", 00:04:24.438 "framework_get_config", 00:04:24.438 "framework_get_subsystems", 00:04:24.438 "fsdev_set_opts", 00:04:24.438 "fsdev_get_opts", 00:04:24.438 "trace_get_info", 00:04:24.438 "trace_get_tpoint_group_mask", 00:04:24.438 "trace_disable_tpoint_group", 00:04:24.438 "trace_enable_tpoint_group", 00:04:24.438 "trace_clear_tpoint_mask", 00:04:24.438 "trace_set_tpoint_mask", 00:04:24.438 "notify_get_notifications", 00:04:24.438 "notify_get_types", 00:04:24.438 "spdk_get_version", 00:04:24.438 "rpc_get_methods" 00:04:24.438 ] 00:04:24.438 14:45:17 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:24.438 14:45:17 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:24.438 14:45:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:24.438 14:45:17 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:24.438 14:45:17 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2922311 00:04:24.438 14:45:17 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2922311 ']' 00:04:24.438 14:45:17 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2922311 00:04:24.438 14:45:17 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:24.438 14:45:17 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.698 14:45:17 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2922311 00:04:24.698 14:45:17 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.698 14:45:17 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.698 14:45:17 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2922311' 00:04:24.698 killing process with pid 2922311 00:04:24.698 14:45:17 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2922311 00:04:24.698 14:45:17 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2922311 00:04:24.957 00:04:24.957 real 0m1.150s 00:04:24.957 user 0m1.912s 00:04:24.957 sys 0m0.445s 00:04:24.957 14:45:17 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.957 14:45:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:24.957 ************************************ 00:04:24.957 END TEST spdkcli_tcp 00:04:24.957 ************************************ 00:04:24.957 14:45:17 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:24.957 14:45:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.957 14:45:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.958 14:45:17 -- common/autotest_common.sh@10 -- # set +x 00:04:24.958 ************************************ 00:04:24.958 START TEST dpdk_mem_utility 00:04:24.958 ************************************ 00:04:24.958 14:45:17 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:24.958 * Looking for test storage... 
00:04:24.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/dpdk_memory_utility 00:04:24.958 14:45:17 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:24.958 14:45:17 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:24.958 14:45:17 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:25.217 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.217 14:45:18 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:25.217 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.217 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:25.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.217 --rc genhtml_branch_coverage=1 00:04:25.217 --rc genhtml_function_coverage=1 00:04:25.217 --rc genhtml_legend=1 00:04:25.218 --rc geninfo_all_blocks=1 00:04:25.218 --rc geninfo_unexecuted_blocks=1 00:04:25.218 00:04:25.218 ' 00:04:25.218 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:25.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.218 --rc 
genhtml_branch_coverage=1 00:04:25.218 --rc genhtml_function_coverage=1 00:04:25.218 --rc genhtml_legend=1 00:04:25.218 --rc geninfo_all_blocks=1 00:04:25.218 --rc geninfo_unexecuted_blocks=1 00:04:25.218 00:04:25.218 ' 00:04:25.218 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:25.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.218 --rc genhtml_branch_coverage=1 00:04:25.218 --rc genhtml_function_coverage=1 00:04:25.218 --rc genhtml_legend=1 00:04:25.218 --rc geninfo_all_blocks=1 00:04:25.218 --rc geninfo_unexecuted_blocks=1 00:04:25.218 00:04:25.218 ' 00:04:25.218 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:25.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.218 --rc genhtml_branch_coverage=1 00:04:25.218 --rc genhtml_function_coverage=1 00:04:25.218 --rc genhtml_legend=1 00:04:25.218 --rc geninfo_all_blocks=1 00:04:25.218 --rc geninfo_unexecuted_blocks=1 00:04:25.218 00:04:25.218 ' 00:04:25.218 14:45:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/dpdk_mem_info.py 00:04:25.218 14:45:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2922613 00:04:25.218 14:45:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2922613 00:04:25.218 14:45:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:25.218 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2922613 ']' 00:04:25.218 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.218 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.218 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.218 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.218 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:25.218 [2024-12-11 14:45:18.122720] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
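The dpdk_mem_utility test starting here launches a plain spdk_tgt, asks it to write its DPDK memory state to a dump file with the env_dpdk_get_mem_stats RPC, and then post-processes that dump with scripts/dpdk_mem_info.py (first the heap/mempool/memzone summary, then the per-heap element list with -m 0), as the trace below shows. A rough sketch of the same sequence, roughly equivalent to what the test's rpc_cmd helper does, assuming a target already listening on /var/tmp/spdk.sock:

  # Ask the running target to write its DPDK memory dump (the log shows it lands in /tmp/spdk_mem_dump.txt).
  scripts/rpc.py env_dpdk_get_mem_stats

  # Summarize heaps, mempools and memzones from the dump.
  scripts/dpdk_mem_info.py

  # Show the detailed free/malloc element list for heap 0, matching the "-m 0" run below.
  scripts/dpdk_mem_info.py -m 0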
00:04:25.218 [2024-12-11 14:45:18.122768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922613 ] 00:04:25.218 [2024-12-11 14:45:18.197008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.218 [2024-12-11 14:45:18.236284] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.477 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.477 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:25.477 14:45:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:25.477 14:45:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:25.477 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.477 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:25.477 { 00:04:25.477 "filename": "/tmp/spdk_mem_dump.txt" 00:04:25.477 } 00:04:25.477 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.478 14:45:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/dpdk_mem_info.py 00:04:25.478 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:25.478 1 heaps totaling size 818.000000 MiB 00:04:25.478 size: 818.000000 MiB heap id: 0 00:04:25.478 end heaps---------- 00:04:25.478 9 mempools totaling size 603.782043 MiB 00:04:25.478 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:25.478 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:25.478 size: 100.555481 MiB name: bdev_io_2922613 00:04:25.478 size: 50.003479 MiB name: msgpool_2922613 00:04:25.478 size: 36.509338 MiB name: fsdev_io_2922613 00:04:25.478 size: 21.763794 MiB name: PDU_Pool 00:04:25.478 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:25.478 size: 4.133484 MiB name: evtpool_2922613 00:04:25.478 size: 0.026123 MiB name: Session_Pool 00:04:25.478 end mempools------- 00:04:25.478 6 memzones totaling size 4.142822 MiB 00:04:25.478 size: 1.000366 MiB name: RG_ring_0_2922613 00:04:25.478 size: 1.000366 MiB name: RG_ring_1_2922613 00:04:25.478 size: 1.000366 MiB name: RG_ring_4_2922613 00:04:25.478 size: 1.000366 MiB name: RG_ring_5_2922613 00:04:25.478 size: 0.125366 MiB name: RG_ring_2_2922613 00:04:25.478 size: 0.015991 MiB name: RG_ring_3_2922613 00:04:25.478 end memzones------- 00:04:25.738 14:45:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/dpdk_mem_info.py -m 0 00:04:25.738 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:25.738 list of free elements. 
size: 10.852478 MiB 00:04:25.738 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:25.738 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:25.738 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:25.738 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:25.738 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:25.738 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:25.738 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:25.738 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:25.738 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:25.738 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:25.738 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:25.738 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:25.738 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:25.738 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:25.738 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:25.738 list of standard malloc elements. size: 199.218628 MiB 00:04:25.738 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:25.738 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:25.738 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:25.738 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:25.738 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:25.738 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:25.738 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:25.738 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:25.738 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:25.738 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:25.738 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:25.738 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:25.738 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:25.738 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:25.738 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:25.738 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:25.738 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:25.738 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:25.738 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:25.738 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:25.738 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:25.738 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:25.738 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:25.738 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:25.738 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:25.738 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:25.738 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:25.738 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:25.738 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:25.738 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:25.738 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:25.738 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:25.738 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:25.738 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:25.738 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:25.738 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:25.738 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:25.738 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:25.738 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:25.738 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:25.738 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:25.738 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:25.738 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:25.738 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:25.738 list of memzone associated elements. size: 607.928894 MiB 00:04:25.738 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:25.738 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:25.738 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:25.738 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:25.738 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:25.738 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2922613_0 00:04:25.738 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:25.738 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2922613_0 00:04:25.738 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:25.738 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2922613_0 00:04:25.738 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:25.738 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:25.738 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:25.738 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:25.738 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:25.738 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2922613_0 00:04:25.738 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:25.738 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2922613 00:04:25.738 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:25.738 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2922613 00:04:25.738 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:25.738 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:25.738 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:25.738 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:25.738 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:25.738 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:25.738 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:25.738 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:25.738 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:25.738 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2922613 00:04:25.738 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:25.738 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2922613 00:04:25.738 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:25.738 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2922613 00:04:25.738 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:25.738 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2922613 00:04:25.739 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:25.739 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2922613 00:04:25.739 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:25.739 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2922613 00:04:25.739 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:25.739 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:25.739 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:25.739 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:25.739 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:25.739 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:25.739 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:25.739 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2922613 00:04:25.739 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:25.739 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2922613 00:04:25.739 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:25.739 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:25.739 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:25.739 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:25.739 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:25.739 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2922613 00:04:25.739 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:25.739 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:25.739 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:25.739 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2922613 00:04:25.739 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:25.739 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2922613 00:04:25.739 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:25.739 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2922613 00:04:25.739 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:25.739 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:25.739 14:45:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:25.739 14:45:18 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2922613 00:04:25.739 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2922613 ']' 00:04:25.739 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2922613 00:04:25.739 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:25.739 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.739 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2922613 00:04:25.739 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.739 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.739 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2922613' 00:04:25.739 killing process with pid 2922613 00:04:25.739 14:45:18 
dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2922613 00:04:25.739 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2922613 00:04:25.999 00:04:25.999 real 0m1.033s 00:04:25.999 user 0m0.955s 00:04:25.999 sys 0m0.423s 00:04:25.999 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.999 14:45:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:25.999 ************************************ 00:04:25.999 END TEST dpdk_mem_utility 00:04:25.999 ************************************ 00:04:25.999 14:45:18 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/event.sh 00:04:25.999 14:45:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.999 14:45:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.999 14:45:18 -- common/autotest_common.sh@10 -- # set +x 00:04:25.999 ************************************ 00:04:25.999 START TEST event 00:04:25.999 ************************************ 00:04:25.999 14:45:18 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/event.sh 00:04:26.259 * Looking for test storage... 00:04:26.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event 00:04:26.259 14:45:19 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:26.259 14:45:19 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:26.259 14:45:19 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:26.259 14:45:19 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:26.259 14:45:19 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:26.259 14:45:19 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:26.259 14:45:19 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:26.259 14:45:19 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.259 14:45:19 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:26.259 14:45:19 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:26.259 14:45:19 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:26.259 14:45:19 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:26.259 14:45:19 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:26.259 14:45:19 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:26.259 14:45:19 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:26.259 14:45:19 event -- scripts/common.sh@344 -- # case "$op" in 00:04:26.259 14:45:19 event -- scripts/common.sh@345 -- # : 1 00:04:26.259 14:45:19 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:26.259 14:45:19 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:26.259 14:45:19 event -- scripts/common.sh@365 -- # decimal 1 00:04:26.259 14:45:19 event -- scripts/common.sh@353 -- # local d=1 00:04:26.259 14:45:19 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.259 14:45:19 event -- scripts/common.sh@355 -- # echo 1 00:04:26.259 14:45:19 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:26.259 14:45:19 event -- scripts/common.sh@366 -- # decimal 2 00:04:26.259 14:45:19 event -- scripts/common.sh@353 -- # local d=2 00:04:26.259 14:45:19 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.259 14:45:19 event -- scripts/common.sh@355 -- # echo 2 00:04:26.259 14:45:19 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:26.259 14:45:19 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:26.259 14:45:19 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:26.259 14:45:19 event -- scripts/common.sh@368 -- # return 0 00:04:26.259 14:45:19 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.259 14:45:19 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:26.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.259 --rc genhtml_branch_coverage=1 00:04:26.259 --rc genhtml_function_coverage=1 00:04:26.259 --rc genhtml_legend=1 00:04:26.259 --rc geninfo_all_blocks=1 00:04:26.259 --rc geninfo_unexecuted_blocks=1 00:04:26.259 00:04:26.259 ' 00:04:26.259 14:45:19 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:26.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.259 --rc genhtml_branch_coverage=1 00:04:26.259 --rc genhtml_function_coverage=1 00:04:26.259 --rc genhtml_legend=1 00:04:26.259 --rc geninfo_all_blocks=1 00:04:26.259 --rc geninfo_unexecuted_blocks=1 00:04:26.259 00:04:26.259 ' 00:04:26.259 14:45:19 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:26.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.259 --rc genhtml_branch_coverage=1 00:04:26.259 --rc genhtml_function_coverage=1 00:04:26.259 --rc genhtml_legend=1 00:04:26.259 --rc geninfo_all_blocks=1 00:04:26.259 --rc geninfo_unexecuted_blocks=1 00:04:26.259 00:04:26.259 ' 00:04:26.259 14:45:19 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:26.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.259 --rc genhtml_branch_coverage=1 00:04:26.259 --rc genhtml_function_coverage=1 00:04:26.259 --rc genhtml_legend=1 00:04:26.259 --rc geninfo_all_blocks=1 00:04:26.259 --rc geninfo_unexecuted_blocks=1 00:04:26.259 00:04:26.259 ' 00:04:26.260 14:45:19 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/bdev/nbd_common.sh 00:04:26.260 14:45:19 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:26.260 14:45:19 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:26.260 14:45:19 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:26.260 14:45:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:26.260 14:45:19 event -- common/autotest_common.sh@10 -- # set +x 00:04:26.260 ************************************ 00:04:26.260 START TEST event_perf 00:04:26.260 ************************************ 00:04:26.260 14:45:19 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/event_perf/event_perf 
-m 0xF -t 1 00:04:26.260 Running I/O for 1 seconds...[2024-12-11 14:45:19.224652] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:04:26.260 [2024-12-11 14:45:19.224718] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922809 ] 00:04:26.260 [2024-12-11 14:45:19.301104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:26.519 [2024-12-11 14:45:19.345319] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:26.519 [2024-12-11 14:45:19.345428] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:26.519 [2024-12-11 14:45:19.345456] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.519 [2024-12-11 14:45:19.345456] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:04:27.457 Running I/O for 1 seconds... 00:04:27.457 lcore 0: 198875 00:04:27.457 lcore 1: 198875 00:04:27.457 lcore 2: 198875 00:04:27.457 lcore 3: 198875 00:04:27.457 done. 00:04:27.457 00:04:27.457 real 0m1.178s 00:04:27.457 user 0m4.102s 00:04:27.457 sys 0m0.074s 00:04:27.457 14:45:20 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.457 14:45:20 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:27.457 ************************************ 00:04:27.457 END TEST event_perf 00:04:27.457 ************************************ 00:04:27.457 14:45:20 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/reactor/reactor -t 1 00:04:27.457 14:45:20 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:27.457 14:45:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.457 14:45:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:27.457 ************************************ 00:04:27.457 START TEST event_reactor 00:04:27.457 ************************************ 00:04:27.457 14:45:20 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/reactor/reactor -t 1 00:04:27.457 [2024-12-11 14:45:20.478138] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:04:27.457 [2024-12-11 14:45:20.478339] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922985 ] 00:04:27.717 [2024-12-11 14:45:20.555503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.717 [2024-12-11 14:45:20.594677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.655 test_start 00:04:28.655 oneshot 00:04:28.655 tick 100 00:04:28.655 tick 100 00:04:28.655 tick 250 00:04:28.655 tick 100 00:04:28.655 tick 100 00:04:28.655 tick 100 00:04:28.655 tick 250 00:04:28.655 tick 500 00:04:28.655 tick 100 00:04:28.655 tick 100 00:04:28.655 tick 250 00:04:28.655 tick 100 00:04:28.655 tick 100 00:04:28.655 test_end 00:04:28.655 00:04:28.655 real 0m1.173s 00:04:28.655 user 0m1.094s 00:04:28.655 sys 0m0.075s 00:04:28.655 14:45:21 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.655 14:45:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:28.655 ************************************ 00:04:28.655 END TEST event_reactor 00:04:28.655 ************************************ 00:04:28.655 14:45:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:28.655 14:45:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:28.655 14:45:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.655 14:45:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.655 ************************************ 00:04:28.655 START TEST event_reactor_perf 00:04:28.655 ************************************ 00:04:28.655 14:45:21 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:28.915 [2024-12-11 14:45:21.721271] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:04:28.915 [2024-12-11 14:45:21.721340] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2923199 ] 00:04:28.915 [2024-12-11 14:45:21.800006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.915 [2024-12-11 14:45:21.837836] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.853 test_start 00:04:29.853 test_end 00:04:29.853 Performance: 507518 events per second 00:04:29.853 00:04:29.853 real 0m1.176s 00:04:29.853 user 0m1.092s 00:04:29.853 sys 0m0.079s 00:04:29.853 14:45:22 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.853 14:45:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:29.853 ************************************ 00:04:29.853 END TEST event_reactor_perf 00:04:29.853 ************************************ 00:04:30.113 14:45:22 event -- event/event.sh@49 -- # uname -s 00:04:30.113 14:45:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:30.113 14:45:22 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/scheduler/scheduler.sh 00:04:30.113 14:45:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.113 14:45:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.113 14:45:22 event -- common/autotest_common.sh@10 -- # set +x 00:04:30.113 ************************************ 00:04:30.113 START TEST event_scheduler 00:04:30.113 ************************************ 00:04:30.113 14:45:22 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/scheduler/scheduler.sh 00:04:30.113 * Looking for test storage... 
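The three event tests above (event_perf, event_reactor, event_reactor_perf) are small app-framework binaries run for a fixed time with a core mask: event_perf reports per-lcore event counts, the reactor test exercises a one-shot event plus repeated timer ticks (the oneshot/tick lines), and reactor_perf prints an events-per-second figure. A sketch of the invocations, mirroring the arguments used above with paths taken relative to an SPDK build tree (absolute workspace paths in the real trace; the numbers reported will differ per machine):

  # Event throughput across four cores for one second.
  test/event/event_perf/event_perf -m 0xF -t 1

  # One-shot and timer events on a single core for one second.
  test/event/reactor/reactor -t 1

  # Raw event rate on a single core for one second.
  test/event/reactor_perf/reactor_perf -t 1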
00:04:30.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/scheduler 00:04:30.113 14:45:23 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:30.113 14:45:23 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:30.113 14:45:23 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:30.113 14:45:23 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:30.113 14:45:23 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.114 14:45:23 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.114 14:45:23 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.114 14:45:23 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:30.114 14:45:23 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.114 14:45:23 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:30.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.114 --rc genhtml_branch_coverage=1 00:04:30.114 --rc genhtml_function_coverage=1 00:04:30.114 --rc genhtml_legend=1 00:04:30.114 --rc geninfo_all_blocks=1 00:04:30.114 --rc geninfo_unexecuted_blocks=1 00:04:30.114 00:04:30.114 ' 00:04:30.114 14:45:23 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:30.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.114 --rc genhtml_branch_coverage=1 00:04:30.114 --rc genhtml_function_coverage=1 00:04:30.114 --rc genhtml_legend=1 00:04:30.114 --rc geninfo_all_blocks=1 00:04:30.114 --rc geninfo_unexecuted_blocks=1 00:04:30.114 00:04:30.114 ' 00:04:30.114 14:45:23 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:30.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.114 --rc genhtml_branch_coverage=1 00:04:30.114 --rc genhtml_function_coverage=1 00:04:30.114 --rc genhtml_legend=1 00:04:30.114 --rc geninfo_all_blocks=1 00:04:30.114 --rc geninfo_unexecuted_blocks=1 00:04:30.114 00:04:30.114 ' 00:04:30.114 14:45:23 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:30.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.114 --rc genhtml_branch_coverage=1 00:04:30.114 --rc genhtml_function_coverage=1 00:04:30.114 --rc genhtml_legend=1 00:04:30.114 --rc geninfo_all_blocks=1 00:04:30.114 --rc geninfo_unexecuted_blocks=1 00:04:30.114 00:04:30.114 ' 00:04:30.114 14:45:23 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:30.114 14:45:23 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2923487 00:04:30.114 14:45:23 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:30.114 14:45:23 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.114 14:45:23 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2923487 00:04:30.114 14:45:23 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2923487 ']' 00:04:30.114 14:45:23 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.114 14:45:23 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.114 14:45:23 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.114 14:45:23 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.114 14:45:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:30.374 [2024-12-11 14:45:23.170253] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:04:30.374 [2024-12-11 14:45:23.170305] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2923487 ] 00:04:30.374 [2024-12-11 14:45:23.245007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:30.374 [2024-12-11 14:45:23.289763] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.374 [2024-12-11 14:45:23.289872] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.374 [2024-12-11 14:45:23.289981] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:30.374 [2024-12-11 14:45:23.289982] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:04:30.374 14:45:23 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.374 14:45:23 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:30.374 14:45:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:30.374 14:45:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.374 14:45:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:30.374 [2024-12-11 14:45:23.330540] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:30.374 [2024-12-11 14:45:23.330557] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:30.374 [2024-12-11 14:45:23.330566] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:30.374 [2024-12-11 14:45:23.330572] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:30.374 [2024-12-11 14:45:23.330577] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:30.374 14:45:23 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.374 14:45:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:30.374 14:45:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.374 14:45:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:30.374 [2024-12-11 14:45:23.405215] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
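The scheduler test above starts its app with --wait-for-rpc, switches to the dynamic scheduler through the framework_set_scheduler RPC, and only then calls framework_start_init; the dpdk_governor error in the trace just reflects that the 0xF core mask covers only some SMT siblings on this host, so the dynamic scheduler runs without the DPDK governor and applies the load/core/busy limits (20/80/95) shown in the notices. A minimal sketch of driving the same two RPCs by hand against a target started with --wait-for-rpc (both method names appear in the rpc_get_methods list earlier in this log):

  # Select the dynamic scheduler before the framework is initialized.
  scripts/rpc.py framework_set_scheduler dynamic

  # Let the framework finish initialization with the chosen scheduler.
  scripts/rpc.py framework_start_init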
00:04:30.374 14:45:23 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.374 14:45:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:30.374 14:45:23 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.374 14:45:23 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.374 14:45:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:30.634 ************************************ 00:04:30.634 START TEST scheduler_create_thread 00:04:30.634 ************************************ 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.634 2 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.634 3 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.634 4 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.634 5 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.634 6 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.634 7 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.634 8 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.634 9 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.634 10 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:30.634 14:45:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.203 14:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:31.203 14:45:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:31.203 14:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.203 14:45:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:32.581 14:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.581 14:45:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:32.581 14:45:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:32.581 14:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.581 14:45:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.518 14:45:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.518 00:04:33.518 real 0m3.100s 00:04:33.518 user 0m0.023s 00:04:33.518 sys 0m0.006s 00:04:33.518 14:45:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.518 14:45:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:33.518 ************************************ 00:04:33.518 END TEST scheduler_create_thread 00:04:33.518 ************************************ 00:04:33.776 14:45:26 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:33.776 14:45:26 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2923487 00:04:33.776 14:45:26 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2923487 ']' 00:04:33.776 14:45:26 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2923487 00:04:33.776 14:45:26 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:33.776 14:45:26 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.776 14:45:26 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2923487 00:04:33.776 14:45:26 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:33.776 14:45:26 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:33.776 14:45:26 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2923487' 00:04:33.776 killing process with pid 2923487 00:04:33.776 14:45:26 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2923487 00:04:33.776 14:45:26 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2923487 00:04:34.035 [2024-12-11 14:45:26.920400] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
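The scheduler_create_thread test that just shut down drives a test-side RPC plugin (scheduler_plugin, loaded only by the scheduler test app, not by a stock spdk_tgt) to create pinned and unpinned threads with different activity levels, then changes one thread's activity and deletes another so the dynamic scheduler has work to rebalance. A condensed sketch of those calls as the test issues them (rpc_cmd is the autotest helper around scripts/rpc.py; the thread IDs 11 and 12 are simply the IDs returned in this particular run):

  # Fully active threads pinned to single cores.
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100

  # Idle pinned threads and partially active unpinned ones.
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0

  # Raise an existing thread to 50% activity, then create and delete a throwaway thread.
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12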
00:04:34.294 00:04:34.294 real 0m4.157s 00:04:34.294 user 0m6.629s 00:04:34.294 sys 0m0.384s 00:04:34.294 14:45:27 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.294 14:45:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:34.294 ************************************ 00:04:34.294 END TEST event_scheduler 00:04:34.294 ************************************ 00:04:34.294 14:45:27 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:34.294 14:45:27 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:34.294 14:45:27 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.294 14:45:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.294 14:45:27 event -- common/autotest_common.sh@10 -- # set +x 00:04:34.294 ************************************ 00:04:34.294 START TEST app_repeat 00:04:34.294 ************************************ 00:04:34.294 14:45:27 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:34.294 14:45:27 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.294 14:45:27 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.294 14:45:27 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:34.294 14:45:27 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.294 14:45:27 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:34.294 14:45:27 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:34.294 14:45:27 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:34.294 14:45:27 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2924224 00:04:34.294 14:45:27 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.294 14:45:27 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:34.294 14:45:27 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2924224' 00:04:34.294 Process app_repeat pid: 2924224 00:04:34.294 14:45:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:34.294 14:45:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:34.294 spdk_app_start Round 0 00:04:34.294 14:45:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2924224 /var/tmp/spdk-nbd.sock 00:04:34.294 14:45:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2924224 ']' 00:04:34.294 14:45:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:34.294 14:45:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.294 14:45:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:34.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:34.294 14:45:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.294 14:45:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:34.294 [2024-12-11 14:45:27.220411] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
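The app_repeat test traced here follows a simple driver pattern: launch the app_repeat binary against a dedicated RPC socket, register a cleanup trap, then run several rounds of start/verify/kill. A condensed sketch of that flow, assuming the autotest_common.sh helpers (waitforlisten, killprocess) are sourced; socket path and core mask are taken from the trace, the binary path is shortened:

# Condensed sketch of the app_repeat driver loop (not the verbatim event.sh code).
rpc_server=/var/tmp/spdk-nbd.sock
modprobe nbd                                      # data path uses NBD block devices

./test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
repeat_pid=$!
trap 'killprocess "$repeat_pid"; exit 1' SIGINT SIGTERM EXIT

for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" "$rpc_server"     # poll until the UNIX socket answers RPCs
    # ... create Malloc bdevs, attach /dev/nbd0 and /dev/nbd1, write and verify data ...
    ./scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM
    sleep 3                                       # the app reinitializes itself for the next round
done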
00:04:34.294 [2024-12-11 14:45:27.220462] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2924224 ] 00:04:34.295 [2024-12-11 14:45:27.293745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:34.295 [2024-12-11 14:45:27.333988] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.295 [2024-12-11 14:45:27.333990] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.554 14:45:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.554 14:45:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:34.554 14:45:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.813 Malloc0 00:04:34.813 14:45:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:34.813 Malloc1 00:04:34.813 14:45:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.813 14:45:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.813 14:45:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.813 14:45:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:34.813 14:45:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.813 14:45:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:34.813 14:45:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:34.813 14:45:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:34.813 14:45:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:34.813 14:45:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:34.813 14:45:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:34.813 14:45:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:34.813 14:45:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:34.813 14:45:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:34.813 14:45:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:34.813 14:45:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:35.072 /dev/nbd0 00:04:35.072 14:45:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:35.072 14:45:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:35.072 14:45:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:35.072 14:45:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:35.072 14:45:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:35.072 14:45:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:35.072 14:45:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:04:35.072 14:45:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:35.072 14:45:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:35.072 14:45:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:35.072 14:45:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:35.072 1+0 records in 00:04:35.072 1+0 records out 00:04:35.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193084 s, 21.2 MB/s 00:04:35.072 14:45:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:04:35.072 14:45:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:35.072 14:45:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:04:35.072 14:45:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:35.072 14:45:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:35.072 14:45:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:35.072 14:45:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:35.072 14:45:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:35.331 /dev/nbd1 00:04:35.331 14:45:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:35.331 14:45:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:35.331 14:45:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:35.331 14:45:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:35.331 14:45:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:35.331 14:45:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:35.331 14:45:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:35.331 14:45:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:35.331 14:45:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:35.331 14:45:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:35.331 14:45:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:35.331 1+0 records in 00:04:35.331 1+0 records out 00:04:35.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249159 s, 16.4 MB/s 00:04:35.331 14:45:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:04:35.331 14:45:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:35.331 14:45:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:04:35.331 14:45:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:35.331 14:45:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:35.331 14:45:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:35.331 14:45:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
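Before any I/O, each NBD device is gated by the waitfornbd helper seen in the trace: wait for the name to appear in /proc/partitions, then confirm that a direct-I/O read of one block actually returns data. A sketch of that check, with the temp-file path and the retry delay as assumptions (the trace only shows the successful first attempt):

# Sketch of the waitfornbd readiness check (simplified).
waitfornbd() {
    local nbd_name=$1 i size tmp=/tmp/nbdtest
    # up to 20 tries for the device to show up in the partition table
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    # up to 20 tries for a 4 KiB direct read to return a non-empty result
    for ((i = 1; i <= 20; i++)); do
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct
        size=$(stat -c %s "$tmp")
        rm -f "$tmp"
        [ "$size" != 0 ] && return 0
        sleep 0.1
    done
    return 1
}

waitfornbd nbd0    # example: block until /dev/nbd0 is readable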
00:04:35.331 14:45:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:35.331 14:45:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.331 14:45:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:35.590 14:45:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:35.590 { 00:04:35.590 "nbd_device": "/dev/nbd0", 00:04:35.590 "bdev_name": "Malloc0" 00:04:35.590 }, 00:04:35.590 { 00:04:35.590 "nbd_device": "/dev/nbd1", 00:04:35.590 "bdev_name": "Malloc1" 00:04:35.590 } 00:04:35.590 ]' 00:04:35.590 14:45:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:35.590 { 00:04:35.590 "nbd_device": "/dev/nbd0", 00:04:35.590 "bdev_name": "Malloc0" 00:04:35.590 }, 00:04:35.590 { 00:04:35.590 "nbd_device": "/dev/nbd1", 00:04:35.590 "bdev_name": "Malloc1" 00:04:35.590 } 00:04:35.590 ]' 00:04:35.590 14:45:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:35.590 14:45:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:35.590 /dev/nbd1' 00:04:35.590 14:45:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:35.590 14:45:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:35.590 /dev/nbd1' 00:04:35.590 14:45:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:35.590 14:45:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:35.590 14:45:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:35.590 14:45:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:35.590 14:45:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:35.590 14:45:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.590 14:45:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:35.590 14:45:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:35.590 14:45:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:04:35.590 14:45:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:35.591 14:45:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:35.591 256+0 records in 00:04:35.591 256+0 records out 00:04:35.591 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105655 s, 99.2 MB/s 00:04:35.591 14:45:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:35.591 14:45:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:35.591 256+0 records in 00:04:35.591 256+0 records out 00:04:35.591 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146599 s, 71.5 MB/s 00:04:35.591 14:45:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:35.591 14:45:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:35.849 256+0 records in 00:04:35.849 256+0 records out 00:04:35.849 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0153816 s, 68.2 
MB/s 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd0 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd1 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:35.849 14:45:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:36.108 14:45:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:36.108 14:45:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:36.108 14:45:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:36.108 14:45:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:36.108 14:45:29 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:36.108 14:45:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:36.108 14:45:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:36.108 14:45:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:36.108 14:45:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:36.108 14:45:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:36.108 14:45:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:36.367 14:45:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:36.367 14:45:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:36.367 14:45:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:36.367 14:45:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:36.367 14:45:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:36.367 14:45:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:36.367 14:45:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:36.367 14:45:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:36.367 14:45:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:36.367 14:45:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:36.367 14:45:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:36.367 14:45:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:36.367 14:45:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:36.625 14:45:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:36.885 [2024-12-11 14:45:29.729021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:36.885 [2024-12-11 14:45:29.766356] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.885 [2024-12-11 14:45:29.766357] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.885 [2024-12-11 14:45:29.807267] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:36.885 [2024-12-11 14:45:29.807306] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:40.172 14:45:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:40.172 14:45:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:40.172 spdk_app_start Round 1 00:04:40.172 14:45:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2924224 /var/tmp/spdk-nbd.sock 00:04:40.172 14:45:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2924224 ']' 00:04:40.172 14:45:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:40.172 14:45:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.172 14:45:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:40.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
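Each round's data check is the nbd_dd_data_verify pattern traced above: fill a temp file with random data, stream it onto every NBD device with direct I/O, then compare the first 1M of each device back against the file. A stand-alone sketch (the temp-file location is illustrative):

# Sketch of the write/verify pass traced for /dev/nbd0 and /dev/nbd1.
nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=/tmp/nbdrandtest

# write: 256 x 4 KiB of random data onto every NBD device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify: the first 1M of each device must match the random file byte for byte
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"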
00:04:40.172 14:45:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.172 14:45:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:40.172 14:45:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.172 14:45:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:40.172 14:45:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.172 Malloc0 00:04:40.172 14:45:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:40.172 Malloc1 00:04:40.172 14:45:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.172 14:45:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.172 14:45:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.172 14:45:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:40.172 14:45:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.172 14:45:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:40.172 14:45:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:40.172 14:45:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.172 14:45:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:40.172 14:45:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:40.172 14:45:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.173 14:45:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:40.173 14:45:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:40.173 14:45:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:40.173 14:45:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.173 14:45:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:40.432 /dev/nbd0 00:04:40.432 14:45:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:40.432 14:45:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:40.432 14:45:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:40.432 14:45:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:40.432 14:45:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:40.432 14:45:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:40.432 14:45:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:40.432 14:45:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:40.432 14:45:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:40.432 14:45:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:40.432 14:45:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.432 1+0 records in 00:04:40.432 1+0 records out 00:04:40.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228638 s, 17.9 MB/s 00:04:40.432 14:45:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:04:40.432 14:45:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:40.432 14:45:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:04:40.432 14:45:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:40.432 14:45:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:40.432 14:45:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.432 14:45:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.432 14:45:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:40.691 /dev/nbd1 00:04:40.691 14:45:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:40.691 14:45:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:40.691 14:45:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:40.691 14:45:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:40.691 14:45:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:40.691 14:45:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:40.691 14:45:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:40.691 14:45:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:40.691 14:45:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:40.691 14:45:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:40.691 14:45:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:40.691 1+0 records in 00:04:40.691 1+0 records out 00:04:40.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393926 s, 10.4 MB/s 00:04:40.691 14:45:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:04:40.691 14:45:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:40.691 14:45:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:04:40.691 14:45:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:40.691 14:45:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:40.691 14:45:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:40.691 14:45:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:40.691 14:45:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:40.691 14:45:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.691 14:45:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:40.950 { 00:04:40.950 "nbd_device": "/dev/nbd0", 00:04:40.950 "bdev_name": "Malloc0" 00:04:40.950 }, 00:04:40.950 { 00:04:40.950 "nbd_device": "/dev/nbd1", 00:04:40.950 "bdev_name": "Malloc1" 00:04:40.950 } 00:04:40.950 ]' 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:40.950 { 00:04:40.950 "nbd_device": "/dev/nbd0", 00:04:40.950 "bdev_name": "Malloc0" 00:04:40.950 }, 00:04:40.950 { 00:04:40.950 "nbd_device": "/dev/nbd1", 00:04:40.950 "bdev_name": "Malloc1" 00:04:40.950 } 00:04:40.950 ]' 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:40.950 /dev/nbd1' 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:40.950 /dev/nbd1' 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:40.950 256+0 records in 00:04:40.950 256+0 records out 00:04:40.950 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00989655 s, 106 MB/s 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:40.950 256+0 records in 00:04:40.950 256+0 records out 00:04:40.950 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145076 s, 72.3 MB/s 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:40.950 256+0 records in 00:04:40.950 256+0 records out 00:04:40.950 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151265 s, 69.3 MB/s 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd0 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd1 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:40.950 14:45:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:41.208 14:45:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:41.208 14:45:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:41.208 14:45:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:41.208 14:45:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.208 14:45:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.208 14:45:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:41.208 14:45:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.208 14:45:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.208 14:45:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:41.209 14:45:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:41.467 14:45:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:41.467 14:45:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:41.467 14:45:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:41.467 14:45:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:41.467 14:45:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:41.467 14:45:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:41.467 14:45:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:41.467 14:45:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:41.467 14:45:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:04:41.467 14:45:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.468 14:45:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:41.726 14:45:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:41.726 14:45:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:41.726 14:45:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:41.726 14:45:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:41.726 14:45:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:41.726 14:45:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:41.726 14:45:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:41.726 14:45:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:41.726 14:45:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:41.726 14:45:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:41.726 14:45:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:41.726 14:45:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:41.726 14:45:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:41.985 14:45:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:41.985 [2024-12-11 14:45:35.025077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.243 [2024-12-11 14:45:35.063275] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.243 [2024-12-11 14:45:35.063275] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.243 [2024-12-11 14:45:35.104984] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:42.243 [2024-12-11 14:45:35.105023] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:45.531 14:45:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:45.531 14:45:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:45.531 spdk_app_start Round 2 00:04:45.531 14:45:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2924224 /var/tmp/spdk-nbd.sock 00:04:45.531 14:45:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2924224 ']' 00:04:45.531 14:45:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:45.531 14:45:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.531 14:45:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:45.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
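The attach/detach bookkeeping in every round relies on nbd_get_disks plus jq, as traced: the RPC returns a JSON array of {nbd_device, bdev_name} pairs, and counting the /dev/nbd entries yields 2 while the disks are attached and 0 after nbd_stop_disk. A sketch of that count check (rpc.py path shortened for readability):

# Sketch of the nbd_get_count check; expected count is 2 while attached, 0 after stop.
nbd_get_count() {
    local rpc_server=/var/tmp/spdk-nbd.sock disks_json names
    disks_json=$(./scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    echo "$names" | grep -c /dev/nbd || true    # || true: grep exits 1 when the count is 0
}

count=$(nbd_get_count)
[ "$count" -eq 2 ] || echo "unexpected NBD count: $count"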
00:04:45.531 14:45:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.531 14:45:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:45.531 14:45:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.531 14:45:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:45.531 14:45:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.531 Malloc0 00:04:45.531 14:45:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:45.531 Malloc1 00:04:45.531 14:45:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.531 14:45:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.531 14:45:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.531 14:45:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:45.531 14:45:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.531 14:45:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:45.531 14:45:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:45.531 14:45:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.531 14:45:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:45.531 14:45:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:45.531 14:45:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.531 14:45:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:45.531 14:45:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:45.531 14:45:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:45.531 14:45:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.531 14:45:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:45.790 /dev/nbd0 00:04:45.790 14:45:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:45.790 14:45:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:45.790 14:45:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:45.790 14:45:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:45.790 14:45:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:45.790 14:45:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:45.790 14:45:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:45.790 14:45:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:45.790 14:45:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:45.790 14:45:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:45.790 14:45:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.790 1+0 records in 00:04:45.790 1+0 records out 00:04:45.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179749 s, 22.8 MB/s 00:04:45.790 14:45:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:04:45.790 14:45:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:45.790 14:45:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:04:45.790 14:45:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:45.790 14:45:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:45.790 14:45:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.790 14:45:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.790 14:45:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:46.048 /dev/nbd1 00:04:46.048 14:45:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:46.048 14:45:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:46.048 14:45:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:46.048 14:45:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:46.048 14:45:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:46.048 14:45:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:46.048 14:45:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:46.048 14:45:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:46.048 14:45:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:46.048 14:45:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:46.048 14:45:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:46.048 1+0 records in 00:04:46.048 1+0 records out 00:04:46.048 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230861 s, 17.7 MB/s 00:04:46.048 14:45:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:04:46.048 14:45:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:46.048 14:45:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:04:46.048 14:45:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:46.049 14:45:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:46.049 14:45:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:46.049 14:45:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:46.049 14:45:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.049 14:45:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.049 14:45:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:46.308 { 00:04:46.308 "nbd_device": "/dev/nbd0", 00:04:46.308 "bdev_name": "Malloc0" 00:04:46.308 }, 00:04:46.308 { 00:04:46.308 "nbd_device": "/dev/nbd1", 00:04:46.308 "bdev_name": "Malloc1" 00:04:46.308 } 00:04:46.308 ]' 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:46.308 { 00:04:46.308 "nbd_device": "/dev/nbd0", 00:04:46.308 "bdev_name": "Malloc0" 00:04:46.308 }, 00:04:46.308 { 00:04:46.308 "nbd_device": "/dev/nbd1", 00:04:46.308 "bdev_name": "Malloc1" 00:04:46.308 } 00:04:46.308 ]' 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:46.308 /dev/nbd1' 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:46.308 /dev/nbd1' 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:46.308 256+0 records in 00:04:46.308 256+0 records out 00:04:46.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106493 s, 98.5 MB/s 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:46.308 256+0 records in 00:04:46.308 256+0 records out 00:04:46.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143962 s, 72.8 MB/s 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:46.308 256+0 records in 00:04:46.308 256+0 records out 00:04:46.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149139 s, 70.3 MB/s 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # 
local operation=verify 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd0 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd1 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.308 14:45:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:46.566 14:45:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:46.566 14:45:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:46.566 14:45:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:46.566 14:45:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.566 14:45:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.566 14:45:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:46.566 14:45:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.566 14:45:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.566 14:45:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:46.566 14:45:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:46.825 14:45:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:46.825 14:45:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:46.825 14:45:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:46.825 14:45:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.825 14:45:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.825 14:45:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:46.825 14:45:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.825 14:45:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.825 14:45:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.825 14:45:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.825 14:45:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:47.085 14:45:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:47.085 14:45:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:47.085 14:45:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:47.085 14:45:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:47.085 14:45:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:47.085 14:45:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:47.085 14:45:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:47.085 14:45:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:47.085 14:45:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:47.085 14:45:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:47.085 14:45:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:47.085 14:45:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:47.085 14:45:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:47.344 14:45:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:47.603 [2024-12-11 14:45:40.401039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.603 [2024-12-11 14:45:40.438974] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.603 [2024-12-11 14:45:40.438974] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.603 [2024-12-11 14:45:40.480429] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:47.603 [2024-12-11 14:45:40.480466] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:50.893 14:45:43 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2924224 /var/tmp/spdk-nbd.sock 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2924224 ']' 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
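The final teardown below (killprocess 2924224), like the scheduler teardown earlier (pid 2923487), goes through the killprocess helper: verify the pid is still alive, look up its command name, check for the sudo special case, then kill and wait. A simplified sketch of that pattern (the sudo handling is reduced to a comment, and the error paths are abbreviated):

# Simplified sketch of the killprocess pattern visible in the trace.
killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 0                   # nothing to do if it is already gone
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    # the real helper branches here when the command name is 'sudo'
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}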
00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:50.893 14:45:43 event.app_repeat -- event/event.sh@39 -- # killprocess 2924224 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2924224 ']' 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2924224 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2924224 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2924224' 00:04:50.893 killing process with pid 2924224 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2924224 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2924224 00:04:50.893 spdk_app_start is called in Round 0. 00:04:50.893 Shutdown signal received, stop current app iteration 00:04:50.893 Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 reinitialization... 00:04:50.893 spdk_app_start is called in Round 1. 00:04:50.893 Shutdown signal received, stop current app iteration 00:04:50.893 Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 reinitialization... 00:04:50.893 spdk_app_start is called in Round 2. 00:04:50.893 Shutdown signal received, stop current app iteration 00:04:50.893 Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 reinitialization... 00:04:50.893 spdk_app_start is called in Round 3. 
00:04:50.893 Shutdown signal received, stop current app iteration 00:04:50.893 14:45:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:50.893 14:45:43 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:50.893 00:04:50.893 real 0m16.471s 00:04:50.893 user 0m36.273s 00:04:50.893 sys 0m2.541s 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.893 14:45:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.893 ************************************ 00:04:50.893 END TEST app_repeat 00:04:50.893 ************************************ 00:04:50.893 14:45:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:50.893 14:45:43 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/cpu_locks.sh 00:04:50.893 14:45:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.893 14:45:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.893 14:45:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.893 ************************************ 00:04:50.893 START TEST cpu_locks 00:04:50.893 ************************************ 00:04:50.893 14:45:43 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/cpu_locks.sh 00:04:50.893 * Looking for test storage... 00:04:50.893 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event 00:04:50.893 14:45:43 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:50.893 14:45:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:04:50.893 14:45:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:50.893 14:45:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:50.893 14:45:43 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.893 14:45:43 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.893 14:45:43 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.893 14:45:43 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.894 14:45:43 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:50.894 14:45:43 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.894 14:45:43 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:50.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.894 --rc genhtml_branch_coverage=1 00:04:50.894 --rc genhtml_function_coverage=1 00:04:50.894 --rc genhtml_legend=1 00:04:50.894 --rc geninfo_all_blocks=1 00:04:50.894 --rc geninfo_unexecuted_blocks=1 00:04:50.894 00:04:50.894 ' 00:04:50.894 14:45:43 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:50.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.894 --rc genhtml_branch_coverage=1 00:04:50.894 --rc genhtml_function_coverage=1 00:04:50.894 --rc genhtml_legend=1 00:04:50.894 --rc geninfo_all_blocks=1 00:04:50.894 --rc geninfo_unexecuted_blocks=1 00:04:50.894 00:04:50.894 ' 00:04:50.894 14:45:43 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:50.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.894 --rc genhtml_branch_coverage=1 00:04:50.894 --rc genhtml_function_coverage=1 00:04:50.894 --rc genhtml_legend=1 00:04:50.894 --rc geninfo_all_blocks=1 00:04:50.894 --rc geninfo_unexecuted_blocks=1 00:04:50.894 00:04:50.894 ' 00:04:50.894 14:45:43 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:50.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.894 --rc genhtml_branch_coverage=1 00:04:50.894 --rc genhtml_function_coverage=1 00:04:50.894 --rc genhtml_legend=1 00:04:50.894 --rc geninfo_all_blocks=1 00:04:50.894 --rc geninfo_unexecuted_blocks=1 00:04:50.894 00:04:50.894 ' 00:04:50.894 14:45:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:50.894 14:45:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:50.894 14:45:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:50.894 14:45:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:50.894 14:45:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.894 14:45:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.894 14:45:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:50.894 ************************************ 
00:04:50.894 START TEST default_locks 00:04:50.894 ************************************ 00:04:50.894 14:45:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:50.894 14:45:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2927331 00:04:50.894 14:45:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2927331 00:04:50.894 14:45:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.894 14:45:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2927331 ']' 00:04:50.894 14:45:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.894 14:45:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.894 14:45:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.894 14:45:43 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.894 14:45:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:51.154 [2024-12-11 14:45:43.985022] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:04:51.154 [2024-12-11 14:45:43.985065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2927331 ] 00:04:51.154 [2024-12-11 14:45:44.059704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.154 [2024-12-11 14:45:44.099521] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.413 14:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.413 14:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:51.413 14:45:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2927331 00:04:51.413 14:45:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2927331 00:04:51.413 14:45:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:51.981 lslocks: write error 00:04:51.981 14:45:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2927331 00:04:51.981 14:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2927331 ']' 00:04:51.981 14:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2927331 00:04:51.981 14:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:51.981 14:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.981 14:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2927331 00:04:51.981 14:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.981 14:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.981 14:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with 
pid 2927331' 00:04:51.981 killing process with pid 2927331 00:04:51.981 14:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2927331 00:04:51.981 14:45:44 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2927331 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2927331 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2927331 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2927331 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2927331 ']' 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 850: kill: (2927331) - No such process 00:04:52.246 ERROR: process (pid: 2927331) is no longer running 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:52.246 00:04:52.246 real 0m1.360s 00:04:52.246 user 0m1.326s 00:04:52.246 sys 0m0.588s 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.246 14:45:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.246 ************************************ 00:04:52.246 END TEST default_locks 00:04:52.246 ************************************ 00:04:52.507 14:45:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:52.507 14:45:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.507 14:45:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.507 14:45:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.507 ************************************ 00:04:52.507 START TEST default_locks_via_rpc 00:04:52.507 ************************************ 00:04:52.507 14:45:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:52.507 14:45:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2927641 00:04:52.507 14:45:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2927641 00:04:52.507 14:45:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:04:52.507 14:45:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2927641 ']' 00:04:52.507 14:45:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.507 14:45:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.507 14:45:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
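The default_locks run that just ended reduces to one check: a target started with -m 0x1 must hold a CPU-core lock file visible to lslocks, and after kill -9 the lock (and the process) must be gone, which is why the later waitforlisten reports "No such process". A standalone version of the locks_exist helper invoked at cpu_locks.sh@22/@49 above, kept as a sketch with the PID from this run:

    # Sketch of the lock check used above: true if the given PID holds an
    # spdk_cpu_lock_* file lock, false otherwise.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # usage (2927331 is the pid reported for this particular run):
    # locks_exist 2927331 && echo "core lock held"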
00:04:52.507 14:45:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.507 14:45:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.507 [2024-12-11 14:45:45.416474] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:04:52.507 [2024-12-11 14:45:45.416516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2927641 ] 00:04:52.507 [2024-12-11 14:45:45.493659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.507 [2024-12-11 14:45:45.535522] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.767 14:45:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.767 14:45:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:52.767 14:45:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:52.767 14:45:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.767 14:45:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.767 14:45:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.767 14:45:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:52.767 14:45:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:52.767 14:45:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:52.767 14:45:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:52.767 14:45:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:52.767 14:45:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.767 14:45:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.767 14:45:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.767 14:45:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2927641 00:04:52.767 14:45:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2927641 00:04:52.767 14:45:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:53.334 14:45:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2927641 00:04:53.334 14:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2927641 ']' 00:04:53.334 14:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2927641 00:04:53.334 14:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:53.334 14:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.334 14:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2927641 00:04:53.334 14:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.334 
14:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.334 14:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2927641' 00:04:53.334 killing process with pid 2927641 00:04:53.334 14:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2927641 00:04:53.334 14:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2927641 00:04:53.593 00:04:53.593 real 0m1.226s 00:04:53.593 user 0m1.176s 00:04:53.593 sys 0m0.554s 00:04:53.593 14:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.593 14:45:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.593 ************************************ 00:04:53.593 END TEST default_locks_via_rpc 00:04:53.593 ************************************ 00:04:53.593 14:45:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:53.593 14:45:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.593 14:45:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.593 14:45:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:53.852 ************************************ 00:04:53.853 START TEST non_locking_app_on_locked_coremask 00:04:53.853 ************************************ 00:04:53.853 14:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:53.853 14:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2927853 00:04:53.853 14:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2927853 /var/tmp/spdk.sock 00:04:53.853 14:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:04:53.853 14:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2927853 ']' 00:04:53.853 14:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.853 14:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.853 14:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.853 14:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.853 14:45:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:53.853 [2024-12-11 14:45:46.711749] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
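The default_locks_via_rpc test that finished above exercises the same core lock, but toggled at runtime rather than at startup: framework_disable_cpumask_locks drops the claim on a live target and framework_enable_cpumask_locks re-takes it, after which lslocks should see the lock again. A condensed sketch built from the RPC names in the trace; the default /var/tmp/spdk.sock socket and the surrounding check are assumptions:

    # Sketch: flip core-mask locking on a running target, then verify the claim.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
    PID=2927641                                   # pid reported for this run
    $RPC framework_disable_cpumask_locks          # release the spdk_cpu_lock_* claim
    $RPC framework_enable_cpumask_locks           # take it again
    lslocks -p "$PID" | grep -q spdk_cpu_lock && echo "lock re-acquired"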
00:04:53.853 [2024-12-11 14:45:46.711792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2927853 ] 00:04:53.853 [2024-12-11 14:45:46.788693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.853 [2024-12-11 14:45:46.829904] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.112 14:45:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.112 14:45:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:54.112 14:45:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2927957 00:04:54.112 14:45:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2927957 /var/tmp/spdk2.sock 00:04:54.112 14:45:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:54.112 14:45:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2927957 ']' 00:04:54.112 14:45:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:54.112 14:45:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:54.112 14:45:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:54.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:54.112 14:45:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:54.112 14:45:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:54.112 [2024-12-11 14:45:47.085334] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:04:54.112 [2024-12-11 14:45:47.085382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2927957 ] 00:04:54.371 [2024-12-11 14:45:47.178477] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:54.371 [2024-12-11 14:45:47.178497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.371 [2024-12-11 14:45:47.258858] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.939 14:45:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.939 14:45:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:54.939 14:45:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2927853 00:04:54.939 14:45:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2927853 00:04:54.939 14:45:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:55.507 lslocks: write error 00:04:55.507 14:45:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2927853 00:04:55.507 14:45:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2927853 ']' 00:04:55.507 14:45:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2927853 00:04:55.507 14:45:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:55.507 14:45:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.507 14:45:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2927853 00:04:55.766 14:45:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:55.766 14:45:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:55.766 14:45:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2927853' 00:04:55.766 killing process with pid 2927853 00:04:55.766 14:45:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2927853 00:04:55.766 14:45:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2927853 00:04:56.334 14:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2927957 00:04:56.334 14:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2927957 ']' 00:04:56.334 14:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2927957 00:04:56.334 14:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:56.334 14:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.334 14:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2927957 00:04:56.334 14:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.334 14:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.334 14:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2927957' 00:04:56.334 
killing process with pid 2927957 00:04:56.334 14:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2927957 00:04:56.334 14:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2927957 00:04:56.593 00:04:56.593 real 0m2.858s 00:04:56.593 user 0m3.014s 00:04:56.593 sys 0m0.937s 00:04:56.593 14:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.593 14:45:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.594 ************************************ 00:04:56.594 END TEST non_locking_app_on_locked_coremask 00:04:56.594 ************************************ 00:04:56.594 14:45:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:56.594 14:45:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.594 14:45:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.594 14:45:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:56.594 ************************************ 00:04:56.594 START TEST locking_app_on_unlocked_coremask 00:04:56.594 ************************************ 00:04:56.594 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:56.594 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2928451 00:04:56.594 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2928451 /var/tmp/spdk.sock 00:04:56.594 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:56.594 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2928451 ']' 00:04:56.594 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.594 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.594 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.594 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.594 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:56.853 [2024-12-11 14:45:49.643482] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:04:56.853 [2024-12-11 14:45:49.643526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2928451 ] 00:04:56.853 [2024-12-11 14:45:49.719472] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
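The non_locking_app_on_locked_coremask test that ended above runs two targets on the same core: the first with locking enabled, the second with --disable-cpumask-locks and its own RPC socket, so the "CPU core locks deactivated" notice is expected and both instances coexist. A condensed sketch of the two launches using the binary path, mask and socket seen in the log (waitforlisten and cleanup omitted):

    # Sketch: first instance claims core 0, second one opts out of the claim.
    SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt
    $SPDK_TGT -m 0x1 &                                                 # holds spdk_cpu_lock_000
    $SPDK_TGT -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # no lock attempt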
00:04:56.853 [2024-12-11 14:45:49.719498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.853 [2024-12-11 14:45:49.760782] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.112 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.112 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:57.112 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2928461 00:04:57.112 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2928461 /var/tmp/spdk2.sock 00:04:57.112 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:57.112 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2928461 ']' 00:04:57.112 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.112 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.112 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:57.112 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.112 14:45:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:57.112 [2024-12-11 14:45:50.021436] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:04:57.112 [2024-12-11 14:45:50.021489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2928461 ] 00:04:57.112 [2024-12-11 14:45:50.115836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.371 [2024-12-11 14:45:50.204548] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.939 14:45:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.939 14:45:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:57.939 14:45:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2928461 00:04:57.939 14:45:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2928461 00:04:57.939 14:45:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:58.507 lslocks: write error 00:04:58.507 14:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2928451 00:04:58.507 14:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2928451 ']' 00:04:58.507 14:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2928451 00:04:58.507 14:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:58.507 14:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.507 14:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2928451 00:04:58.508 14:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.508 14:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.508 14:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2928451' 00:04:58.508 killing process with pid 2928451 00:04:58.508 14:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2928451 00:04:58.508 14:45:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2928451 00:04:59.076 14:45:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2928461 00:04:59.076 14:45:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2928461 ']' 00:04:59.076 14:45:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2928461 00:04:59.076 14:45:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:59.076 14:45:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.076 14:45:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2928461 00:04:59.076 14:45:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.076 14:45:52 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.076 14:45:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2928461' 00:04:59.077 killing process with pid 2928461 00:04:59.077 14:45:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2928461 00:04:59.077 14:45:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2928461 00:04:59.645 00:04:59.645 real 0m2.811s 00:04:59.645 user 0m2.967s 00:04:59.645 sys 0m0.942s 00:04:59.645 14:45:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.645 14:45:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.645 ************************************ 00:04:59.645 END TEST locking_app_on_unlocked_coremask 00:04:59.645 ************************************ 00:04:59.645 14:45:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:59.645 14:45:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:59.645 14:45:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.645 14:45:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.645 ************************************ 00:04:59.645 START TEST locking_app_on_locked_coremask 00:04:59.645 ************************************ 00:04:59.645 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:59.645 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2928954 00:04:59.645 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2928954 /var/tmp/spdk.sock 00:04:59.645 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.645 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2928954 ']' 00:04:59.645 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.645 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.645 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.645 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.645 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.645 [2024-12-11 14:45:52.522070] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
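locking_app_on_unlocked_coremask, which just finished, is the mirror case: the first target runs with --disable-cpumask-locks, so a second target with locking enabled can still claim core 0 and both lslocks checks pass. A short sketch with the arguments from the trace (again omitting the waits and kills):

    # Sketch: the unlocked first instance leaves core 0 claimable by the second.
    SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt
    $SPDK_TGT -m 0x1 --disable-cpumask-locks &     # prints "CPU core locks deactivated"
    $SPDK_TGT -m 0x1 -r /var/tmp/spdk2.sock &      # succeeds in taking the core lock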
00:04:59.645 [2024-12-11 14:45:52.522115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2928954 ] 00:04:59.645 [2024-12-11 14:45:52.596792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.645 [2024-12-11 14:45:52.633639] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.904 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:59.904 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:59.904 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2928961 00:04:59.904 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2928961 /var/tmp/spdk2.sock 00:04:59.904 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:59.904 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:59.904 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2928961 /var/tmp/spdk2.sock 00:04:59.904 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:59.905 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.905 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:59.905 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:59.905 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2928961 /var/tmp/spdk2.sock 00:04:59.905 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2928961 ']' 00:04:59.905 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.905 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:59.905 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:59.905 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:59.905 14:45:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:59.905 [2024-12-11 14:45:52.911630] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:04:59.905 [2024-12-11 14:45:52.911676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2928961 ] 00:05:00.163 [2024-12-11 14:45:53.002769] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2928954 has claimed it. 00:05:00.163 [2024-12-11 14:45:53.002806] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:00.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 850: kill: (2928961) - No such process 00:05:00.730 ERROR: process (pid: 2928961) is no longer running 00:05:00.730 14:45:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.730 14:45:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:00.730 14:45:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:00.730 14:45:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:00.730 14:45:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:00.730 14:45:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:00.731 14:45:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2928954 00:05:00.731 14:45:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2928954 00:05:00.731 14:45:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.989 lslocks: write error 00:05:00.989 14:45:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2928954 00:05:00.989 14:45:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2928954 ']' 00:05:00.989 14:45:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2928954 00:05:00.989 14:45:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:00.989 14:45:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:00.989 14:45:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2928954 00:05:01.248 14:45:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.248 14:45:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.248 14:45:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2928954' 00:05:01.248 killing process with pid 2928954 00:05:01.248 14:45:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2928954 00:05:01.248 14:45:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2928954 00:05:01.508 00:05:01.508 real 0m1.912s 00:05:01.508 user 0m2.030s 00:05:01.508 sys 0m0.665s 00:05:01.508 14:45:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:01.508 14:45:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.508 ************************************ 00:05:01.508 END TEST locking_app_on_locked_coremask 00:05:01.508 ************************************ 00:05:01.508 14:45:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:01.508 14:45:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.508 14:45:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.508 14:45:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.508 ************************************ 00:05:01.508 START TEST locking_overlapped_coremask 00:05:01.508 ************************************ 00:05:01.508 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:01.508 14:45:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2929218 00:05:01.508 14:45:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2929218 /var/tmp/spdk.sock 00:05:01.508 14:45:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x7 00:05:01.508 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2929218 ']' 00:05:01.508 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.508 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.508 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.508 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.508 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:01.508 [2024-12-11 14:45:54.502395] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
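In locking_app_on_locked_coremask, above, both targets want the lock on the same core, so the second one is expected to abort with "Cannot create lock on core 0, probably process ... has claimed it" and the test asserts that failure with the NOT wrapper around waitforlisten. A simplified sketch of that expected-failure pattern; the foreground if-check stands in for the NOT/waitforlisten pair and is an assumption, not the script's exact flow:

    # Sketch: a second locked instance on an already-claimed mask must fail to start.
    SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt
    $SPDK_TGT -m 0x1 &                                 # first instance claims core 0
    first=$!
    sleep 1
    if $SPDK_TGT -m 0x1 -r /var/tmp/spdk2.sock; then   # should exit non-zero
        echo "unexpected: second instance acquired the core lock" >&2
    fi
    kill "$first"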
00:05:01.508 [2024-12-11 14:45:54.502441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2929218 ] 00:05:01.766 [2024-12-11 14:45:54.579823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:01.766 [2024-12-11 14:45:54.619726] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.766 [2024-12-11 14:45:54.619835] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.766 [2024-12-11 14:45:54.619836] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.024 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.025 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:02.025 14:45:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2929289 00:05:02.025 14:45:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2929289 /var/tmp/spdk2.sock 00:05:02.025 14:45:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:02.025 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:02.025 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2929289 /var/tmp/spdk2.sock 00:05:02.025 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:02.025 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.025 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:02.025 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.025 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2929289 /var/tmp/spdk2.sock 00:05:02.025 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2929289 ']' 00:05:02.025 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:02.025 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.025 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:02.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:02.025 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.025 14:45:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.025 [2024-12-11 14:45:54.889641] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:05:02.025 [2024-12-11 14:45:54.889691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2929289 ] 00:05:02.025 [2024-12-11 14:45:54.984899] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2929218 has claimed it. 00:05:02.025 [2024-12-11 14:45:54.984941] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:02.596 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 850: kill: (2929289) - No such process 00:05:02.596 ERROR: process (pid: 2929289) is no longer running 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2929218 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2929218 ']' 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2929218 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2929218 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2929218' 00:05:02.596 killing process with pid 2929218 00:05:02.596 14:45:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2929218 00:05:02.596 14:45:55 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2929218 00:05:02.856 00:05:02.856 real 0m1.453s 00:05:02.856 user 0m4.015s 00:05:02.856 sys 0m0.400s 00:05:02.856 14:45:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.856 14:45:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:02.856 ************************************ 00:05:02.856 END TEST locking_overlapped_coremask 00:05:02.856 ************************************ 00:05:03.116 14:45:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:03.116 14:45:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.116 14:45:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.116 14:45:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.116 ************************************ 00:05:03.116 START TEST locking_overlapped_coremask_via_rpc 00:05:03.116 ************************************ 00:05:03.116 14:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:03.116 14:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2929492 00:05:03.116 14:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2929492 /var/tmp/spdk.sock 00:05:03.116 14:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:03.116 14:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2929492 ']' 00:05:03.116 14:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.116 14:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.116 14:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.116 14:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.116 14:45:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.116 [2024-12-11 14:45:56.019688] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:03.116 [2024-12-11 14:45:56.019730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2929492 ] 00:05:03.116 [2024-12-11 14:45:56.094201] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:03.116 [2024-12-11 14:45:56.094229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:03.116 [2024-12-11 14:45:56.137129] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.116 [2024-12-11 14:45:56.137238] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.116 [2024-12-11 14:45:56.137237] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.396 14:45:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.396 14:45:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.396 14:45:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2929653 00:05:03.396 14:45:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2929653 /var/tmp/spdk2.sock 00:05:03.396 14:45:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:03.396 14:45:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2929653 ']' 00:05:03.396 14:45:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.396 14:45:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.396 14:45:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.396 14:45:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.396 14:45:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.396 [2024-12-11 14:45:56.423974] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:03.396 [2024-12-11 14:45:56.424028] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2929653 ] 00:05:03.704 [2024-12-11 14:45:56.518526] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:03.704 [2024-12-11 14:45:56.518553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:03.704 [2024-12-11 14:45:56.605336] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.704 [2024-12-11 14:45:56.605449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.704 [2024-12-11 14:45:56.605450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.301 [2024-12-11 14:45:57.291239] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2929492 has claimed it. 
00:05:04.301 request: 00:05:04.301 { 00:05:04.301 "method": "framework_enable_cpumask_locks", 00:05:04.301 "req_id": 1 00:05:04.301 } 00:05:04.301 Got JSON-RPC error response 00:05:04.301 response: 00:05:04.301 { 00:05:04.301 "code": -32603, 00:05:04.301 "message": "Failed to claim CPU core: 2" 00:05:04.301 } 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2929492 /var/tmp/spdk.sock 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2929492 ']' 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.301 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.560 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.560 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:04.560 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2929653 /var/tmp/spdk2.sock 00:05:04.560 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2929653 ']' 00:05:04.560 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:04.560 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.560 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:04.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:04.560 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.560 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.820 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.820 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:04.820 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:04.820 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:04.820 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:04.820 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:04.820 00:05:04.820 real 0m1.739s 00:05:04.820 user 0m0.851s 00:05:04.820 sys 0m0.126s 00:05:04.820 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.820 14:45:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.820 ************************************ 00:05:04.820 END TEST locking_overlapped_coremask_via_rpc 00:05:04.820 ************************************ 00:05:04.820 14:45:57 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:04.820 14:45:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2929492 ]] 00:05:04.820 14:45:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2929492 00:05:04.820 14:45:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2929492 ']' 00:05:04.820 14:45:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2929492 00:05:04.820 14:45:57 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:04.820 14:45:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.820 14:45:57 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2929492 00:05:04.820 14:45:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.820 14:45:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.820 14:45:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2929492' 00:05:04.820 killing process with pid 2929492 00:05:04.820 14:45:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2929492 00:05:04.820 14:45:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2929492 00:05:05.079 14:45:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2929653 ]] 00:05:05.079 14:45:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2929653 00:05:05.079 14:45:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2929653 ']' 00:05:05.079 14:45:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2929653 00:05:05.079 14:45:58 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:05.079 14:45:58 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:05:05.079 14:45:58 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2929653 00:05:05.338 14:45:58 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:05.338 14:45:58 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:05.338 14:45:58 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2929653' 00:05:05.338 killing process with pid 2929653 00:05:05.338 14:45:58 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2929653 00:05:05.338 14:45:58 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2929653 00:05:05.597 14:45:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:05.597 14:45:58 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:05.597 14:45:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2929492 ]] 00:05:05.597 14:45:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2929492 00:05:05.597 14:45:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2929492 ']' 00:05:05.597 14:45:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2929492 00:05:05.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (2929492) - No such process 00:05:05.597 14:45:58 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2929492 is not found' 00:05:05.597 Process with pid 2929492 is not found 00:05:05.597 14:45:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2929653 ]] 00:05:05.597 14:45:58 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2929653 00:05:05.597 14:45:58 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2929653 ']' 00:05:05.597 14:45:58 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2929653 00:05:05.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (2929653) - No such process 00:05:05.597 14:45:58 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2929653 is not found' 00:05:05.597 Process with pid 2929653 is not found 00:05:05.597 14:45:58 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:05.597 00:05:05.597 real 0m14.748s 00:05:05.597 user 0m25.216s 00:05:05.597 sys 0m5.194s 00:05:05.597 14:45:58 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.597 14:45:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.597 ************************************ 00:05:05.597 END TEST cpu_locks 00:05:05.597 ************************************ 00:05:05.597 00:05:05.597 real 0m39.510s 00:05:05.597 user 1m14.683s 00:05:05.597 sys 0m8.716s 00:05:05.597 14:45:58 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.597 14:45:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.597 ************************************ 00:05:05.597 END TEST event 00:05:05.597 ************************************ 00:05:05.597 14:45:58 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/thread.sh 00:05:05.597 14:45:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.597 14:45:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.597 14:45:58 -- common/autotest_common.sh@10 -- # set +x 00:05:05.597 ************************************ 00:05:05.597 START TEST thread 00:05:05.597 ************************************ 00:05:05.597 14:45:58 thread -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/thread.sh 00:05:05.857 * Looking for test storage... 00:05:05.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread 00:05:05.857 14:45:58 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:05.857 14:45:58 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:05.857 14:45:58 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.857 14:45:58 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.857 14:45:58 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.857 14:45:58 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.857 14:45:58 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.857 14:45:58 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.857 14:45:58 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.857 14:45:58 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.857 14:45:58 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.857 14:45:58 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.857 14:45:58 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.857 14:45:58 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.857 14:45:58 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.857 14:45:58 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:05.857 14:45:58 thread -- scripts/common.sh@345 -- # : 1 00:05:05.857 14:45:58 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.857 14:45:58 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.857 14:45:58 thread -- scripts/common.sh@365 -- # decimal 1 00:05:05.857 14:45:58 thread -- scripts/common.sh@353 -- # local d=1 00:05:05.857 14:45:58 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.857 14:45:58 thread -- scripts/common.sh@355 -- # echo 1 00:05:05.857 14:45:58 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.857 14:45:58 thread -- scripts/common.sh@366 -- # decimal 2 00:05:05.857 14:45:58 thread -- scripts/common.sh@353 -- # local d=2 00:05:05.857 14:45:58 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.857 14:45:58 thread -- scripts/common.sh@355 -- # echo 2 00:05:05.857 14:45:58 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.857 14:45:58 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.857 14:45:58 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.857 14:45:58 thread -- scripts/common.sh@368 -- # return 0 00:05:05.857 14:45:58 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.857 14:45:58 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.857 --rc genhtml_branch_coverage=1 00:05:05.857 --rc genhtml_function_coverage=1 00:05:05.857 --rc genhtml_legend=1 00:05:05.857 --rc geninfo_all_blocks=1 00:05:05.857 --rc geninfo_unexecuted_blocks=1 00:05:05.857 00:05:05.857 ' 00:05:05.857 14:45:58 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.857 --rc genhtml_branch_coverage=1 00:05:05.857 --rc genhtml_function_coverage=1 00:05:05.857 --rc genhtml_legend=1 00:05:05.857 --rc geninfo_all_blocks=1 00:05:05.857 --rc geninfo_unexecuted_blocks=1 00:05:05.857 
00:05:05.857 ' 00:05:05.857 14:45:58 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:05.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.857 --rc genhtml_branch_coverage=1 00:05:05.857 --rc genhtml_function_coverage=1 00:05:05.857 --rc genhtml_legend=1 00:05:05.857 --rc geninfo_all_blocks=1 00:05:05.857 --rc geninfo_unexecuted_blocks=1 00:05:05.857 00:05:05.857 ' 00:05:05.857 14:45:58 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.857 --rc genhtml_branch_coverage=1 00:05:05.857 --rc genhtml_function_coverage=1 00:05:05.857 --rc genhtml_legend=1 00:05:05.857 --rc geninfo_all_blocks=1 00:05:05.857 --rc geninfo_unexecuted_blocks=1 00:05:05.857 00:05:05.857 ' 00:05:05.857 14:45:58 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:05.857 14:45:58 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:05.857 14:45:58 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.857 14:45:58 thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.857 ************************************ 00:05:05.857 START TEST thread_poller_perf 00:05:05.857 ************************************ 00:05:05.857 14:45:58 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:05.857 [2024-12-11 14:45:58.810343] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:05.857 [2024-12-11 14:45:58.810411] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2930070 ] 00:05:05.857 [2024-12-11 14:45:58.890753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.116 [2024-12-11 14:45:58.930968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.116 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:07.054 [2024-12-11T13:46:00.102Z] ====================================== 00:05:07.054 [2024-12-11T13:46:00.102Z] busy:2307261492 (cyc) 00:05:07.054 [2024-12-11T13:46:00.102Z] total_run_count: 411000 00:05:07.054 [2024-12-11T13:46:00.102Z] tsc_hz: 2300000000 (cyc) 00:05:07.054 [2024-12-11T13:46:00.102Z] ====================================== 00:05:07.054 [2024-12-11T13:46:00.102Z] poller_cost: 5613 (cyc), 2440 (nsec) 00:05:07.054 00:05:07.054 real 0m1.185s 00:05:07.054 user 0m1.098s 00:05:07.054 sys 0m0.082s 00:05:07.054 14:45:59 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.054 14:45:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:07.054 ************************************ 00:05:07.054 END TEST thread_poller_perf 00:05:07.054 ************************************ 00:05:07.054 14:46:00 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:07.054 14:46:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:07.054 14:46:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.054 14:46:00 thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.054 ************************************ 00:05:07.054 START TEST thread_poller_perf 00:05:07.054 ************************************ 00:05:07.054 14:46:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:07.054 [2024-12-11 14:46:00.066630] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:07.054 [2024-12-11 14:46:00.066700] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2930317 ] 00:05:07.313 [2024-12-11 14:46:00.144513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.313 [2024-12-11 14:46:00.183463] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.313 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:08.250 [2024-12-11T13:46:01.298Z] ====================================== 00:05:08.250 [2024-12-11T13:46:01.298Z] busy:2301884082 (cyc) 00:05:08.250 [2024-12-11T13:46:01.298Z] total_run_count: 4940000 00:05:08.250 [2024-12-11T13:46:01.298Z] tsc_hz: 2300000000 (cyc) 00:05:08.250 [2024-12-11T13:46:01.298Z] ====================================== 00:05:08.250 [2024-12-11T13:46:01.298Z] poller_cost: 465 (cyc), 202 (nsec) 00:05:08.250 00:05:08.250 real 0m1.179s 00:05:08.250 user 0m1.101s 00:05:08.250 sys 0m0.073s 00:05:08.250 14:46:01 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.250 14:46:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:08.250 ************************************ 00:05:08.250 END TEST thread_poller_perf 00:05:08.250 ************************************ 00:05:08.250 14:46:01 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:08.250 00:05:08.250 real 0m2.685s 00:05:08.250 user 0m2.351s 00:05:08.250 sys 0m0.348s 00:05:08.250 14:46:01 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.250 14:46:01 thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.250 ************************************ 00:05:08.250 END TEST thread 00:05:08.250 ************************************ 00:05:08.250 14:46:01 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:08.250 14:46:01 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/cmdline.sh 00:05:08.250 14:46:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.250 14:46:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.250 14:46:01 -- common/autotest_common.sh@10 -- # set +x 00:05:08.508 ************************************ 00:05:08.508 START TEST app_cmdline 00:05:08.508 ************************************ 00:05:08.508 14:46:01 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/cmdline.sh 00:05:08.508 * Looking for test storage... 
00:05:08.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app 00:05:08.508 14:46:01 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.508 14:46:01 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.508 14:46:01 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:08.508 14:46:01 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:08.508 14:46:01 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.508 14:46:01 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.508 14:46:01 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.508 14:46:01 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.508 14:46:01 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.508 14:46:01 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.508 14:46:01 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.508 14:46:01 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.508 14:46:01 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.508 14:46:01 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.508 14:46:01 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.508 14:46:01 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:08.508 14:46:01 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:08.508 14:46:01 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.508 14:46:01 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:08.508 14:46:01 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:08.509 14:46:01 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:08.509 14:46:01 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.509 14:46:01 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:08.509 14:46:01 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.509 14:46:01 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:08.509 14:46:01 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:08.509 14:46:01 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.509 14:46:01 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:08.509 14:46:01 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.509 14:46:01 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.509 14:46:01 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.509 14:46:01 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:08.509 14:46:01 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.509 14:46:01 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:08.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.509 --rc genhtml_branch_coverage=1 00:05:08.509 --rc genhtml_function_coverage=1 00:05:08.509 --rc genhtml_legend=1 00:05:08.509 --rc geninfo_all_blocks=1 00:05:08.509 --rc geninfo_unexecuted_blocks=1 00:05:08.509 00:05:08.509 ' 00:05:08.509 14:46:01 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:08.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.509 --rc genhtml_branch_coverage=1 00:05:08.509 --rc genhtml_function_coverage=1 00:05:08.509 --rc genhtml_legend=1 00:05:08.509 --rc geninfo_all_blocks=1 00:05:08.509 --rc geninfo_unexecuted_blocks=1 
00:05:08.509 00:05:08.509 ' 00:05:08.509 14:46:01 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:08.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.509 --rc genhtml_branch_coverage=1 00:05:08.509 --rc genhtml_function_coverage=1 00:05:08.509 --rc genhtml_legend=1 00:05:08.509 --rc geninfo_all_blocks=1 00:05:08.509 --rc geninfo_unexecuted_blocks=1 00:05:08.509 00:05:08.509 ' 00:05:08.509 14:46:01 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:08.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.509 --rc genhtml_branch_coverage=1 00:05:08.509 --rc genhtml_function_coverage=1 00:05:08.509 --rc genhtml_legend=1 00:05:08.509 --rc geninfo_all_blocks=1 00:05:08.509 --rc geninfo_unexecuted_blocks=1 00:05:08.509 00:05:08.509 ' 00:05:08.509 14:46:01 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:08.509 14:46:01 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2930612 00:05:08.509 14:46:01 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:08.509 14:46:01 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2930612 00:05:08.509 14:46:01 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2930612 ']' 00:05:08.509 14:46:01 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.509 14:46:01 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.509 14:46:01 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.509 14:46:01 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.509 14:46:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:08.509 [2024-12-11 14:46:01.555320] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:05:08.509 [2024-12-11 14:46:01.555369] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2930612 ] 00:05:08.768 [2024-12-11 14:46:01.631844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.768 [2024-12-11 14:46:01.673365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.027 14:46:01 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.027 14:46:01 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:09.027 14:46:01 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py spdk_get_version 00:05:09.027 { 00:05:09.027 "version": "SPDK v25.01-pre git sha1 4dfeb7f95", 00:05:09.027 "fields": { 00:05:09.027 "major": 25, 00:05:09.027 "minor": 1, 00:05:09.027 "patch": 0, 00:05:09.027 "suffix": "-pre", 00:05:09.027 "commit": "4dfeb7f95" 00:05:09.027 } 00:05:09.027 } 00:05:09.027 14:46:02 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:09.027 14:46:02 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:09.027 14:46:02 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:09.027 14:46:02 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:09.286 14:46:02 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:09.286 14:46:02 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.286 14:46:02 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.286 14:46:02 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:09.286 14:46:02 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:09.286 14:46:02 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:05:09.286 14:46:02 app_cmdline -- 
common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:09.286 request: 00:05:09.286 { 00:05:09.286 "method": "env_dpdk_get_mem_stats", 00:05:09.286 "req_id": 1 00:05:09.286 } 00:05:09.286 Got JSON-RPC error response 00:05:09.286 response: 00:05:09.286 { 00:05:09.286 "code": -32601, 00:05:09.286 "message": "Method not found" 00:05:09.286 } 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:09.286 14:46:02 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2930612 00:05:09.286 14:46:02 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2930612 ']' 00:05:09.287 14:46:02 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2930612 00:05:09.287 14:46:02 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:09.287 14:46:02 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.287 14:46:02 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2930612 00:05:09.546 14:46:02 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.546 14:46:02 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.546 14:46:02 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2930612' 00:05:09.546 killing process with pid 2930612 00:05:09.546 14:46:02 app_cmdline -- common/autotest_common.sh@973 -- # kill 2930612 00:05:09.546 14:46:02 app_cmdline -- common/autotest_common.sh@978 -- # wait 2930612 00:05:09.805 00:05:09.805 real 0m1.337s 00:05:09.805 user 0m1.553s 00:05:09.805 sys 0m0.454s 00:05:09.805 14:46:02 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.805 14:46:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:09.805 ************************************ 00:05:09.805 END TEST app_cmdline 00:05:09.805 ************************************ 00:05:09.805 14:46:02 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/version.sh 00:05:09.805 14:46:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.805 14:46:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.805 14:46:02 -- common/autotest_common.sh@10 -- # set +x 00:05:09.805 ************************************ 00:05:09.805 START TEST version 00:05:09.805 ************************************ 00:05:09.805 14:46:02 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/version.sh 00:05:09.805 * Looking for test storage... 
00:05:09.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app 00:05:09.805 14:46:02 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.805 14:46:02 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.805 14:46:02 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.065 14:46:02 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.065 14:46:02 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.065 14:46:02 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.065 14:46:02 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.065 14:46:02 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.065 14:46:02 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.065 14:46:02 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.065 14:46:02 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.065 14:46:02 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.065 14:46:02 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.065 14:46:02 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.065 14:46:02 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.065 14:46:02 version -- scripts/common.sh@344 -- # case "$op" in 00:05:10.065 14:46:02 version -- scripts/common.sh@345 -- # : 1 00:05:10.065 14:46:02 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.065 14:46:02 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.065 14:46:02 version -- scripts/common.sh@365 -- # decimal 1 00:05:10.065 14:46:02 version -- scripts/common.sh@353 -- # local d=1 00:05:10.065 14:46:02 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.065 14:46:02 version -- scripts/common.sh@355 -- # echo 1 00:05:10.065 14:46:02 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.065 14:46:02 version -- scripts/common.sh@366 -- # decimal 2 00:05:10.065 14:46:02 version -- scripts/common.sh@353 -- # local d=2 00:05:10.065 14:46:02 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.065 14:46:02 version -- scripts/common.sh@355 -- # echo 2 00:05:10.065 14:46:02 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.065 14:46:02 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.065 14:46:02 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.065 14:46:02 version -- scripts/common.sh@368 -- # return 0 00:05:10.065 14:46:02 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.065 14:46:02 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.065 --rc genhtml_branch_coverage=1 00:05:10.065 --rc genhtml_function_coverage=1 00:05:10.065 --rc genhtml_legend=1 00:05:10.065 --rc geninfo_all_blocks=1 00:05:10.065 --rc geninfo_unexecuted_blocks=1 00:05:10.065 00:05:10.065 ' 00:05:10.065 14:46:02 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.065 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.065 --rc genhtml_branch_coverage=1 00:05:10.065 --rc genhtml_function_coverage=1 00:05:10.065 --rc genhtml_legend=1 00:05:10.065 --rc geninfo_all_blocks=1 00:05:10.065 --rc geninfo_unexecuted_blocks=1 00:05:10.066 00:05:10.066 ' 00:05:10.066 14:46:02 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:10.066 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.066 --rc genhtml_branch_coverage=1 00:05:10.066 --rc genhtml_function_coverage=1 00:05:10.066 --rc genhtml_legend=1 00:05:10.066 --rc geninfo_all_blocks=1 00:05:10.066 --rc geninfo_unexecuted_blocks=1 00:05:10.066 00:05:10.066 ' 00:05:10.066 14:46:02 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.066 --rc genhtml_branch_coverage=1 00:05:10.066 --rc genhtml_function_coverage=1 00:05:10.066 --rc genhtml_legend=1 00:05:10.066 --rc geninfo_all_blocks=1 00:05:10.066 --rc geninfo_unexecuted_blocks=1 00:05:10.066 00:05:10.066 ' 00:05:10.066 14:46:02 version -- app/version.sh@17 -- # get_header_version major 00:05:10.066 14:46:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/version.h 00:05:10.066 14:46:02 version -- app/version.sh@14 -- # cut -f2 00:05:10.066 14:46:02 version -- app/version.sh@14 -- # tr -d '"' 00:05:10.066 14:46:02 version -- app/version.sh@17 -- # major=25 00:05:10.066 14:46:02 version -- app/version.sh@18 -- # get_header_version minor 00:05:10.066 14:46:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/version.h 00:05:10.066 14:46:02 version -- app/version.sh@14 -- # cut -f2 00:05:10.066 14:46:02 version -- app/version.sh@14 -- # tr -d '"' 00:05:10.066 14:46:02 version -- app/version.sh@18 -- # minor=1 00:05:10.066 14:46:02 version -- app/version.sh@19 -- # get_header_version patch 00:05:10.066 14:46:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/version.h 00:05:10.066 14:46:02 version -- app/version.sh@14 -- # cut -f2 00:05:10.066 14:46:02 version -- app/version.sh@14 -- # tr -d '"' 00:05:10.066 14:46:02 version -- app/version.sh@19 -- # patch=0 00:05:10.066 14:46:02 version -- app/version.sh@20 -- # get_header_version suffix 00:05:10.066 14:46:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/version.h 00:05:10.066 14:46:02 version -- app/version.sh@14 -- # cut -f2 00:05:10.066 14:46:02 version -- app/version.sh@14 -- # tr -d '"' 00:05:10.066 14:46:02 version -- app/version.sh@20 -- # suffix=-pre 00:05:10.066 14:46:02 version -- app/version.sh@22 -- # version=25.1 00:05:10.066 14:46:02 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:10.066 14:46:02 version -- app/version.sh@28 -- # version=25.1rc0 00:05:10.066 14:46:02 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python 00:05:10.066 14:46:02 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:10.066 14:46:02 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:10.066 14:46:02 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:10.066 00:05:10.066 real 0m0.245s 00:05:10.066 user 0m0.143s 00:05:10.066 sys 0m0.144s 00:05:10.066 14:46:02 version -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:10.066 14:46:02 version -- common/autotest_common.sh@10 -- # set +x 00:05:10.066 ************************************ 00:05:10.066 END TEST version 00:05:10.066 ************************************ 00:05:10.066 14:46:03 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:10.066 14:46:03 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:10.066 14:46:03 -- spdk/autotest.sh@194 -- # uname -s 00:05:10.066 14:46:03 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:10.066 14:46:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:10.066 14:46:03 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:10.066 14:46:03 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:10.066 14:46:03 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:10.066 14:46:03 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:10.066 14:46:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:10.066 14:46:03 -- common/autotest_common.sh@10 -- # set +x 00:05:10.066 14:46:03 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:10.066 14:46:03 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:10.066 14:46:03 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:10.066 14:46:03 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:10.066 14:46:03 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:10.066 14:46:03 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:10.066 14:46:03 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:10.066 14:46:03 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:10.066 14:46:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.066 14:46:03 -- common/autotest_common.sh@10 -- # set +x 00:05:10.066 ************************************ 00:05:10.066 START TEST nvmf_tcp 00:05:10.066 ************************************ 00:05:10.066 14:46:03 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:10.326 * Looking for test storage... 
00:05:10.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:05:10.326 14:46:03 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:10.326 14:46:03 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:10.326 14:46:03 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.326 14:46:03 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.326 14:46:03 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:10.326 14:46:03 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.326 14:46:03 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.326 --rc genhtml_branch_coverage=1 00:05:10.326 --rc genhtml_function_coverage=1 00:05:10.326 --rc genhtml_legend=1 00:05:10.326 --rc geninfo_all_blocks=1 00:05:10.326 --rc geninfo_unexecuted_blocks=1 00:05:10.326 00:05:10.326 ' 00:05:10.326 14:46:03 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.326 --rc genhtml_branch_coverage=1 00:05:10.326 --rc genhtml_function_coverage=1 00:05:10.326 --rc genhtml_legend=1 00:05:10.326 --rc geninfo_all_blocks=1 00:05:10.326 --rc geninfo_unexecuted_blocks=1 00:05:10.326 00:05:10.326 ' 00:05:10.326 14:46:03 nvmf_tcp -- common/autotest_common.sh@1725 -- # 
export 'LCOV=lcov 00:05:10.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.326 --rc genhtml_branch_coverage=1 00:05:10.326 --rc genhtml_function_coverage=1 00:05:10.326 --rc genhtml_legend=1 00:05:10.326 --rc geninfo_all_blocks=1 00:05:10.326 --rc geninfo_unexecuted_blocks=1 00:05:10.326 00:05:10.326 ' 00:05:10.326 14:46:03 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.326 --rc genhtml_branch_coverage=1 00:05:10.326 --rc genhtml_function_coverage=1 00:05:10.326 --rc genhtml_legend=1 00:05:10.326 --rc geninfo_all_blocks=1 00:05:10.326 --rc geninfo_unexecuted_blocks=1 00:05:10.326 00:05:10.326 ' 00:05:10.326 14:46:03 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:10.326 14:46:03 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:10.326 14:46:03 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:10.326 14:46:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:10.326 14:46:03 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.326 14:46:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.326 ************************************ 00:05:10.326 START TEST nvmf_target_core 00:05:10.326 ************************************ 00:05:10.326 14:46:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:10.586 * Looking for test storage... 00:05:10.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.586 --rc genhtml_branch_coverage=1 00:05:10.586 --rc genhtml_function_coverage=1 00:05:10.586 --rc genhtml_legend=1 00:05:10.586 --rc geninfo_all_blocks=1 00:05:10.586 --rc geninfo_unexecuted_blocks=1 00:05:10.586 00:05:10.586 ' 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.586 --rc genhtml_branch_coverage=1 00:05:10.586 --rc genhtml_function_coverage=1 00:05:10.586 --rc genhtml_legend=1 00:05:10.586 --rc geninfo_all_blocks=1 00:05:10.586 --rc geninfo_unexecuted_blocks=1 00:05:10.586 00:05:10.586 ' 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:10.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.586 --rc genhtml_branch_coverage=1 00:05:10.586 --rc genhtml_function_coverage=1 00:05:10.586 --rc genhtml_legend=1 00:05:10.586 --rc geninfo_all_blocks=1 00:05:10.586 --rc geninfo_unexecuted_blocks=1 00:05:10.586 00:05:10.586 ' 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.586 --rc genhtml_branch_coverage=1 00:05:10.586 --rc genhtml_function_coverage=1 00:05:10.586 --rc genhtml_legend=1 00:05:10.586 --rc geninfo_all_blocks=1 00:05:10.586 --rc geninfo_unexecuted_blocks=1 00:05:10.586 00:05:10.586 ' 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.586 14:46:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:10.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:10.587 
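The "[: : integer expression expected" message from nvmf/common.sh line 33 is bash's test builtin refusing an empty operand where -eq needs an integer: the trace shows the check expanding to '[' '' -eq 1 ']' because the flag being tested is unset for this job. A minimal reproduction of the pitfall and one defensive form, with an illustrative variable name rather than the one common.sh actually reads:

    # Unset flag expands to an empty string, which -eq cannot parse as an integer.
    unset SPDK_TEST_SOME_FLAG                          # hypothetical name, for illustration only
    [ "$SPDK_TEST_SOME_FLAG" -eq 1 ] && echo enabled   # -> [: : integer expression expected
    # Defaulting the expansion keeps the comparison purely numeric.
    [ "${SPDK_TEST_SOME_FLAG:-0}" -eq 1 ] && echo enabled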
************************************ 00:05:10.587 START TEST nvmf_abort 00:05:10.587 ************************************ 00:05:10.587 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:10.847 * Looking for test storage... 00:05:10.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.847 --rc genhtml_branch_coverage=1 00:05:10.847 --rc genhtml_function_coverage=1 00:05:10.847 --rc genhtml_legend=1 00:05:10.847 --rc geninfo_all_blocks=1 00:05:10.847 --rc geninfo_unexecuted_blocks=1 00:05:10.847 00:05:10.847 ' 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.847 --rc genhtml_branch_coverage=1 00:05:10.847 --rc genhtml_function_coverage=1 00:05:10.847 --rc genhtml_legend=1 00:05:10.847 --rc geninfo_all_blocks=1 00:05:10.847 --rc geninfo_unexecuted_blocks=1 00:05:10.847 00:05:10.847 ' 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:10.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.847 --rc genhtml_branch_coverage=1 00:05:10.847 --rc genhtml_function_coverage=1 00:05:10.847 --rc genhtml_legend=1 00:05:10.847 --rc geninfo_all_blocks=1 00:05:10.847 --rc geninfo_unexecuted_blocks=1 00:05:10.847 00:05:10.847 ' 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.847 --rc genhtml_branch_coverage=1 00:05:10.847 --rc genhtml_function_coverage=1 00:05:10.847 --rc genhtml_legend=1 00:05:10.847 --rc geninfo_all_blocks=1 00:05:10.847 --rc geninfo_unexecuted_blocks=1 00:05:10.847 00:05:10.847 ' 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.847 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:10.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # 
nvmftestinit 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:10.848 14:46:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.421 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:17.421 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:17.421 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:17.421 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:17.421 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:17.421 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:17.422 14:46:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:17.422 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:17.422 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:17.422 14:46:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:17.422 Found net devices under 0000:86:00.0: cvl_0_0 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:17.422 Found net devices under 0000:86:00.1: cvl_0_1 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:17.422 14:46:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:17.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:17.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:05:17.422 00:05:17.422 --- 10.0.0.2 ping statistics --- 00:05:17.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:17.422 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:17.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:17.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:05:17.422 00:05:17.422 --- 10.0.0.1 ping statistics --- 00:05:17.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:17.422 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:17.422 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2934286 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2934286 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2934286 ']' 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.423 [2024-12-11 14:46:09.675420] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
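The nvmf_tcp_init trace above builds the point-to-point test network for this NET_TYPE=phy run: the first e810 port (cvl_0_0) is moved into a private namespace and addressed as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, and both directions are ping-checked. Condensed into a standalone sketch, with interface names and addresses taken from the trace (run as root; the comment tag the harness appends to the iptables rule is omitted):

    ip netns add cvl_0_0_ns_spdk                                         # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target-facing port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic from the initiator port
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator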
00:05:17.423 [2024-12-11 14:46:09.675468] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:17.423 [2024-12-11 14:46:09.756319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:17.423 [2024-12-11 14:46:09.799031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:17.423 [2024-12-11 14:46:09.799067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:17.423 [2024-12-11 14:46:09.799076] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:17.423 [2024-12-11 14:46:09.799081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:17.423 [2024-12-11 14:46:09.799086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:17.423 [2024-12-11 14:46:09.800431] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.423 [2024-12-11 14:46:09.800454] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.423 [2024-12-11 14:46:09.800457] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.423 [2024-12-11 14:46:09.938172] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.423 Malloc0 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.423 Delay0 
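Before the abort workload starts, the target is populated with a 64 MB, 4096-byte-block malloc bdev wrapped in a delay bdev (Delay0) whose read and write latencies are set high enough that queued I/O is still in flight when the abort commands arrive. rpc_cmd is the harness wrapper around SPDK's JSON-RPC client; a rough standalone equivalent of just this bdev setup, assuming the stock scripts/rpc.py client and its default /var/tmp/spdk.sock socket, would be:

    # Hypothetical replay of the two bdev RPCs traced above.
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0      # 64 MB backing bdev, 4096-byte blocks
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000             # average/p99 read and write latencies, copied from the trace

Making Delay0, rather than Malloc0, the namespace of cnode0 in the steps that follow is what keeps enough commands outstanding for the abort example to have something to cancel.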
00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.423 14:46:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.423 14:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.423 14:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:17.423 14:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.423 14:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.423 14:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.423 14:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:17.423 14:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.423 14:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.423 [2024-12-11 14:46:10.020748] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:17.423 14:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.423 14:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:17.423 14:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.423 14:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:17.423 14:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.423 14:46:10 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:17.423 [2024-12-11 14:46:10.194296] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:19.327 Initializing NVMe Controllers 00:05:19.327 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:19.327 controller IO queue size 128 less than required 00:05:19.327 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:19.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:19.327 Initialization complete. Launching workers. 
00:05:19.327 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36176 00:05:19.327 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36237, failed to submit 62 00:05:19.327 success 36180, unsuccessful 57, failed 0 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:19.327 rmmod nvme_tcp 00:05:19.327 rmmod nvme_fabrics 00:05:19.327 rmmod nvme_keyring 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2934286 ']' 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2934286 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2934286 ']' 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2934286 00:05:19.327 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2934286 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2934286' 00:05:19.585 killing process with pid 2934286 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2934286 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2934286 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:19.585 14:46:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:19.585 14:46:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:22.122 00:05:22.122 real 0m11.122s 00:05:22.122 user 0m11.810s 00:05:22.122 sys 0m5.398s 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:22.122 ************************************ 00:05:22.122 END TEST nvmf_abort 00:05:22.122 ************************************ 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:22.122 ************************************ 00:05:22.122 START TEST nvmf_ns_hotplug_stress 00:05:22.122 ************************************ 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:22.122 * Looking for test storage... 
00:05:22.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.122 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:22.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.123 --rc genhtml_branch_coverage=1 00:05:22.123 --rc genhtml_function_coverage=1 00:05:22.123 --rc genhtml_legend=1 00:05:22.123 --rc geninfo_all_blocks=1 00:05:22.123 --rc geninfo_unexecuted_blocks=1 00:05:22.123 00:05:22.123 ' 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:22.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.123 --rc genhtml_branch_coverage=1 00:05:22.123 --rc genhtml_function_coverage=1 00:05:22.123 --rc genhtml_legend=1 00:05:22.123 --rc geninfo_all_blocks=1 00:05:22.123 --rc geninfo_unexecuted_blocks=1 00:05:22.123 00:05:22.123 ' 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:22.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.123 --rc genhtml_branch_coverage=1 00:05:22.123 --rc genhtml_function_coverage=1 00:05:22.123 --rc genhtml_legend=1 00:05:22.123 --rc geninfo_all_blocks=1 00:05:22.123 --rc geninfo_unexecuted_blocks=1 00:05:22.123 00:05:22.123 ' 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:22.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.123 --rc genhtml_branch_coverage=1 00:05:22.123 --rc genhtml_function_coverage=1 00:05:22.123 --rc genhtml_legend=1 00:05:22.123 --rc geninfo_all_blocks=1 00:05:22.123 --rc geninfo_unexecuted_blocks=1 00:05:22.123 00:05:22.123 ' 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:22.123 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:22.123 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:22.124 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:22.124 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:22.124 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:22.124 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:22.124 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:22.124 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:22.124 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:22.124 14:46:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:28.698 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:28.698 
14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:28.698 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:28.698 Found net devices under 0000:86:00.0: cvl_0_0 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:28.698 Found net devices under 0000:86:00.1: cvl_0_1 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:28.698 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:28.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:28.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:05:28.699 00:05:28.699 --- 10.0.0.2 ping statistics --- 00:05:28.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:28.699 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:28.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:28.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:05:28.699 00:05:28.699 --- 10.0.0.1 ping statistics --- 00:05:28.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:28.699 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:28.699 14:46:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:28.699 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:28.699 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:28.699 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.699 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:28.699 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2938317 00:05:28.699 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:28.699 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2938317 00:05:28.699 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
2938317 ']' 00:05:28.699 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.699 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.699 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.699 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.699 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:28.699 [2024-12-11 14:46:21.079461] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:05:28.699 [2024-12-11 14:46:21.079508] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:28.699 [2024-12-11 14:46:21.157101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:28.699 [2024-12-11 14:46:21.196607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:28.699 [2024-12-11 14:46:21.196642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:28.699 [2024-12-11 14:46:21.196650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:28.699 [2024-12-11 14:46:21.196657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:28.699 [2024-12-11 14:46:21.196662] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
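The trace above covers the nvmftestinit/nvmfappstart phase: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace, both ends are addressed on 10.0.0.0/24, an iptables rule opens TCP port 4420 on the initiator side, nvme-tcp is loaded, and nvmf_tgt is started inside the namespace on cores 1-3 (-m 0xE). A minimal sketch of that bring-up, using only commands visible in the trace; the interface names, addresses and SPDK path are the ones from this run and would differ on another host:

    # Sketch of the nvmftestinit/nvmfappstart steps seen in this log (not the harness itself).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk   # workspace path from this run
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side E810 port
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator-side port stays in the root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    modprobe nvme-tcp
    # Start the target inside the namespace, as nvmfappstart -m 0xE does in this run
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &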
00:05:28.699 [2024-12-11 14:46:21.197943] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.699 [2024-12-11 14:46:21.198054] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.699 [2024-12-11 14:46:21.198055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.699 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.699 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:28.699 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:28.699 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:28.700 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:28.700 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:28.700 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:28.700 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:28.700 [2024-12-11 14:46:21.519414] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:28.700 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:28.958 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:28.958 [2024-12-11 14:46:21.920815] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:28.958 14:46:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:29.217 14:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:29.476 Malloc0 00:05:29.476 14:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:29.734 Delay0 00:05:29.734 14:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.734 14:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:29.992 NULL1 00:05:29.992 14:46:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:30.251 14:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2938800 00:05:30.251 14:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:30.251 14:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:30.251 14:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.510 14:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.768 14:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:30.768 14:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:30.768 true 00:05:30.768 14:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:30.768 14:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.026 14:46:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.286 14:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:31.286 14:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:31.544 true 00:05:31.544 14:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:31.544 14:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:31.802 14:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:31.802 14:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:31.803 14:46:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:32.060 true 00:05:32.061 14:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:32.061 14:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.318 14:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.576 14:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:32.576 14:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:32.834 true 00:05:32.835 14:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:32.835 14:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.093 14:46:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.093 14:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:33.093 14:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:33.352 true 00:05:33.352 14:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:33.352 14:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:33.610 14:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:33.868 14:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:33.868 14:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:34.127 true 00:05:34.127 14:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:34.127 14:46:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.127 14:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:34.386 14:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:34.386 14:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:34.644 true 
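Between target start-up and the resize loop, ns_hotplug_stress.sh builds the whole export path over rpc.py: a TCP transport, subsystem cnode1 with a data and a discovery listener on 10.0.0.2:4420, a 32 MB malloc bdev wrapped in a delay bdev, and a 1000 MB null bdev, after which spdk_nvme_perf is attached as the load generator. The sequence below is reduced to the calls visible in the trace; the rpc path, NQN and sizes are the ones used in this run:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
    rpc="$SPDK/scripts/rpc.py"                          # rpc_py as set at ns_hotplug_stress.sh@11
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0            # 32 MB backing bdev, 512-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns $nqn Delay0               # the namespace the loop below removes and re-adds
    $rpc bdev_null_create NULL1 1000 512                 # 1000 MB null bdev, resized on every pass
    $rpc nvmf_subsystem_add_ns $nqn NULL1
    # 30 s of 512-byte random reads at queue depth 128 against the new listener
    "$SPDK/build/bin/spdk_nvme_perf" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!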
00:05:34.644 14:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:34.644 14:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.902 14:46:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.161 14:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:35.161 14:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:35.161 true 00:05:35.419 14:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:35.419 14:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.419 14:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:35.678 14:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:35.678 14:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:35.936 true 00:05:35.936 14:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:35.936 14:46:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.195 14:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.453 14:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:36.453 14:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:36.453 true 00:05:36.453 14:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:36.453 14:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.711 14:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.970 14:46:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:36.970 14:46:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:37.229 true 00:05:37.229 14:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:37.229 14:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.488 14:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.746 14:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:37.746 14:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:37.746 true 00:05:37.746 14:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:37.746 14:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.005 14:46:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.264 14:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:38.264 14:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:38.522 true 00:05:38.522 14:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:38.522 14:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.780 14:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.038 14:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:39.038 14:46:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:39.038 true 00:05:39.038 14:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:39.038 14:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:39.297 14:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:39.555 14:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:39.555 14:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:39.813 true 00:05:39.813 14:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:39.813 14:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.072 14:46:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.330 14:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:40.330 14:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:40.330 true 00:05:40.330 14:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:40.330 14:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.589 14:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.847 14:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:40.847 14:46:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:41.105 true 00:05:41.105 14:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:41.105 14:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.364 14:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.622 14:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:41.622 14:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:41.622 true 00:05:41.622 14:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:41.622 14:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:05:41.880 14:46:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.139 14:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:42.139 14:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:42.397 true 00:05:42.397 14:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:42.397 14:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.656 14:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.914 14:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:42.914 14:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:42.914 true 00:05:42.914 14:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:42.914 14:46:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.173 14:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.431 14:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:43.431 14:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:43.690 true 00:05:43.690 14:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:43.690 14:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.948 14:46:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.206 14:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:44.206 14:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:44.206 true 00:05:44.465 14:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 
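The repetition from here on is the stress loop itself: each pass checks that the perf process is still alive, removes namespace 1, re-adds Delay0, bumps null_size by one and resizes NULL1 to the new value, all while the target keeps answering the running randread workload. Reduced to the calls the trace keeps repeating; this is a sketch, not the literal script, with PERF_PID, rpc path and NQN as in the sketch above:

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do    # stop once the 30 s perf run exits
        $rpc nvmf_subsystem_remove_ns $nqn 1     # hot-remove namespace 1 under I/O
        $rpc nvmf_subsystem_add_ns $nqn Delay0   # hot-add it back
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 $null_size   # grow NULL1: 1001, 1002, ...
    done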
00:05:44.465 14:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.465 14:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.723 14:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:44.723 14:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:44.981 true 00:05:44.981 14:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:44.981 14:46:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.240 14:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.498 14:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:45.498 14:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:45.756 true 00:05:45.756 14:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:45.756 14:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.756 14:46:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.014 14:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:46.014 14:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:46.273 true 00:05:46.273 14:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:46.273 14:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.531 14:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.790 14:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:46.790 14:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:47.048 true 00:05:47.048 14:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:47.048 14:46:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.306 14:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.306 14:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:47.306 14:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:05:47.565 true 00:05:47.565 14:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:47.565 14:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.823 14:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.082 14:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:48.082 14:46:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:48.341 true 00:05:48.341 14:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:48.341 14:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.599 14:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.599 14:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:05:48.599 14:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:05:48.857 true 00:05:48.857 14:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:48.857 14:46:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.116 14:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.396 14:46:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:05:49.396 14:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:05:49.692 true 00:05:49.692 14:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:49.692 14:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.692 14:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.991 14:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:05:49.991 14:46:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:05:50.249 true 00:05:50.249 14:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:50.249 14:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.508 14:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.766 14:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:05:50.766 14:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:05:50.766 true 00:05:50.766 14:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:50.767 14:46:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.025 14:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.283 14:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:05:51.283 14:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:05:51.542 true 00:05:51.542 14:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:51.542 14:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.800 14:46:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.059 14:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:05:52.059 14:46:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:05:52.059 true 00:05:52.059 14:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:52.059 14:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.316 14:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.574 14:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:05:52.574 14:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:05:52.833 true 00:05:52.833 14:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:52.833 14:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.092 14:46:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.350 14:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:05:53.350 14:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:05:53.350 true 00:05:53.609 14:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:53.609 14:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.609 14:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.868 14:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:05:53.868 14:46:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:05:54.127 true 00:05:54.127 14:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:54.127 14:46:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.386 14:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.645 14:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:05:54.645 14:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:05:54.645 true 00:05:54.904 14:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:54.904 14:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.904 14:46:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.162 14:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:05:55.162 14:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:05:55.421 true 00:05:55.421 14:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:55.421 14:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.680 14:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.938 14:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:05:55.938 14:46:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:05:55.938 true 00:05:56.197 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:56.197 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.197 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.455 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:05:56.455 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:05:56.714 true 00:05:56.714 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:56.714 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.973 14:46:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.231 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:05:57.231 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:05:57.490 true 00:05:57.490 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:57.490 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.490 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.749 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:05:57.749 14:46:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:05:58.007 true 00:05:58.007 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:58.007 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.266 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.524 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:05:58.524 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:05:58.783 true 00:05:58.783 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:58.783 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.042 14:46:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.300 14:46:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:05:59.300 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:05:59.300 true 00:05:59.300 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:05:59.300 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.559 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.817 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:05:59.817 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:00.075 true 00:06:00.075 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:06:00.075 14:46:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.334 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.592 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:00.593 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:00.593 true 00:06:00.593 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:06:00.593 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.593 Initializing NVMe Controllers 00:06:00.593 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:00.593 Controller IO queue size 128, less than required. 00:06:00.593 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:00.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:00.593 Initialization complete. Launching workers. 
00:06:00.593 ======================================================== 00:06:00.593 Latency(us) 00:06:00.593 Device Information : IOPS MiB/s Average min max 00:06:00.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 26788.93 13.08 4777.78 2450.67 8764.12 00:06:00.593 ======================================================== 00:06:00.593 Total : 26788.93 13.08 4777.78 2450.67 8764.12 00:06:00.593 00:06:00.851 14:46:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.110 14:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:01.110 14:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:01.368 true 00:06:01.368 14:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2938800 00:06:01.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2938800) - No such process 00:06:01.368 14:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2938800 00:06:01.368 14:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.627 14:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:01.627 14:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:01.627 14:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:01.627 14:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:01.627 14:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.627 14:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:01.886 null0 00:06:01.886 14:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:01.886 14:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:01.886 14:46:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:02.144 null1 00:06:02.144 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.144 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.144 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:02.144 null2 00:06:02.402 
14:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.402 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.402 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:02.402 null3 00:06:02.402 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.402 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.402 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:02.661 null4 00:06:02.661 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.661 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.661 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:02.920 null5 00:06:02.920 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:02.920 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:02.920 14:46:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:03.179 null6 00:06:03.179 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:03.179 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.179 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:03.179 null7 00:06:03.179 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:03.179 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2944473 2944474 2944476 2944479 2944480 2944482 2944484 2944486 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.439 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:03.699 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:03.958 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.958 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:03.958 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:03.959 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:03.959 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:03.959 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:03.959 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:03.959 14:46:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.218 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.478 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.478 14:46:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.478 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.478 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.478 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.478 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.478 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.478 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.478 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.478 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.478 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.478 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.478 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.478 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.478 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.478 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.478 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.737 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.737 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.737 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:04.737 14:46:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:04.738 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:04.738 14:46:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:04.998 14:46:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.257 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.257 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.257 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.257 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.257 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.257 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.257 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.257 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.516 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:05.776 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.035 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.035 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.035 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.035 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.035 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.035 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.035 14:46:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:06.035 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.295 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.554 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.555 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:06.813 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.813 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:06.814 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.073 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.073 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.073 14:46:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.073 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:07.333 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:07.333 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:07.333 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:07.333 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:07.333 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.333 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:07.333 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:07.333 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:07.592 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:07.593 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:07.593 rmmod nvme_tcp 00:06:07.593 rmmod nvme_fabrics 00:06:07.593 rmmod nvme_keyring 00:06:07.593 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:07.593 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:07.593 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:07.593 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2938317 ']' 00:06:07.593 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2938317 00:06:07.593 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2938317 ']' 00:06:07.593 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2938317 00:06:07.593 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:07.593 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.593 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2938317 00:06:07.593 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:07.593 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:07.593 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2938317' 
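The @16-@18 trace lines repeated above come from the namespace hotplug loop in target/ns_hotplug_stress.sh: ten passes that attach the null bdevs null0-null7 to nqn.2016-06.io.spdk:cnode1 as namespaces 1-8 and then detach them again, each pass in a different order. A minimal sketch of that loop, assuming rpc.py and the null bdevs are already in place; the shuffling helper below is an illustration of the varying order seen in the trace, not the verbatim upstream script:

    # Hedged reconstruction of the traced add/remove loop (ns_hotplug_stress.sh @16-@18).
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; i++ )); do                # matches the @16 counter trace
        for n in $(shuf -e 1 2 3 4 5 6 7 8); do     # attach nsid 1..8 in a shuffled order
            $rpc_py nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        for n in $(shuf -e 1 2 3 4 5 6 7 8); do     # detach them again, order varies per pass
            $rpc_py nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    done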
00:06:07.593 killing process with pid 2938317 00:06:07.593 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2938317 00:06:07.593 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2938317 00:06:07.852 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:07.852 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:07.852 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:07.852 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:07.852 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:07.852 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:07.852 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:07.852 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:07.852 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:07.852 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:07.852 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:07.852 14:47:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.398 14:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:10.398 00:06:10.398 real 0m48.068s 00:06:10.398 user 3m24.384s 00:06:10.398 sys 0m17.397s 00:06:10.398 14:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.398 14:47:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:10.398 ************************************ 00:06:10.398 END TEST nvmf_ns_hotplug_stress 00:06:10.398 ************************************ 00:06:10.398 14:47:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:10.398 14:47:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:10.398 14:47:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.398 14:47:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:10.398 ************************************ 00:06:10.398 START TEST nvmf_delete_subsystem 00:06:10.398 ************************************ 00:06:10.398 14:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:10.398 * Looking for test storage... 
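Before the delete_subsystem test starts, nvmftestfini tears the previous target down, as traced in the entries just above: the nvme-tcp and nvme-fabrics modules are unloaded, the nvmf_tgt process (pid 2938317 here) is killed, and the SPDK-tagged firewall rules and namespace plumbing are undone. Condensed as a hedged sketch; the helper name follows the nvmf/common.sh traces above, but the exact function body is an assumption:

    # Hedged sketch of the teardown sequence traced above, not the verbatim common.sh.
    nvmftestfini() {
        sync
        modprobe -v -r nvme-tcp nvme-fabrics                  # rmmod nvme_tcp/nvme_fabrics/nvme_keyring above
        kill "$nvmfpid" && wait "$nvmfpid"                    # stop the nvmf_tgt reactor
        iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the SPDK-tagged rules
        ip netns delete cvl_0_0_ns_spdk 2>/dev/null           # assumed body of _remove_spdk_ns (output discarded in the trace)
        ip -4 addr flush cvl_0_1                              # release the initiator-side address
    }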
00:06:10.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:06:10.398 14:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.398 14:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.398 14:47:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:10.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.398 --rc genhtml_branch_coverage=1 00:06:10.398 --rc genhtml_function_coverage=1 00:06:10.398 --rc genhtml_legend=1 00:06:10.398 --rc geninfo_all_blocks=1 00:06:10.398 --rc geninfo_unexecuted_blocks=1 00:06:10.398 00:06:10.398 ' 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:10.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.398 --rc genhtml_branch_coverage=1 00:06:10.398 --rc genhtml_function_coverage=1 00:06:10.398 --rc genhtml_legend=1 00:06:10.398 --rc geninfo_all_blocks=1 00:06:10.398 --rc geninfo_unexecuted_blocks=1 00:06:10.398 00:06:10.398 ' 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:10.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.398 --rc genhtml_branch_coverage=1 00:06:10.398 --rc genhtml_function_coverage=1 00:06:10.398 --rc genhtml_legend=1 00:06:10.398 --rc geninfo_all_blocks=1 00:06:10.398 --rc geninfo_unexecuted_blocks=1 00:06:10.398 00:06:10.398 ' 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:10.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.398 --rc genhtml_branch_coverage=1 00:06:10.398 --rc genhtml_function_coverage=1 00:06:10.398 --rc genhtml_legend=1 00:06:10.398 --rc geninfo_all_blocks=1 00:06:10.398 --rc geninfo_unexecuted_blocks=1 00:06:10.398 00:06:10.398 ' 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.398 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:10.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:10.399 14:47:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- 
# local -ga x722 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:16.971 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:16.971 
14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:16.971 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:16.971 Found net devices under 0000:86:00.0: cvl_0_0 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:16.971 Found net devices under 0000:86:00.1: cvl_0_1 
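With the two E810 ports mapped to cvl_0_0 and cvl_0_1, nvmf_tcp_init (traced below) splits them across network namespaces: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace and given 10.0.0.2/24 for the target side, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule opens TCP port 4420, and a ping in each direction confirms the link. A condensed sketch of that sequence, simplified from the traced commands (the comment string on the iptables rule is abbreviated here):

    # Hedged summary of the nvmf_tcp_init steps traced below.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator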
00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:16.971 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:16.972 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:16.972 14:47:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:16.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:16.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:06:16.972 00:06:16.972 --- 10.0.0.2 ping statistics --- 00:06:16.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.972 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:16.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:16.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:06:16.972 00:06:16.972 --- 10.0.0.1 ping statistics --- 00:06:16.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:16.972 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2948866 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2948866 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2948866 ']' 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.972 14:47:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 [2024-12-11 14:47:09.262612] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:16.972 [2024-12-11 14:47:09.262662] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:16.972 [2024-12-11 14:47:09.346588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.972 [2024-12-11 14:47:09.386365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:16.972 [2024-12-11 14:47:09.386402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:16.972 [2024-12-11 14:47:09.386411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:16.972 [2024-12-11 14:47:09.386421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:16.972 [2024-12-11 14:47:09.386426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:16.972 [2024-12-11 14:47:09.387646] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.972 [2024-12-11 14:47:09.387647] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 [2024-12-11 14:47:09.533368] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:16.972 14:47:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 [2024-12-11 14:47:09.553573] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 NULL1 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 Delay0 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2949079 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:16.972 14:47:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:16.972 [2024-12-11 14:47:09.665376] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
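The perf job just launched targets a subsystem that the trace assembled entirely over RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and a namespace backed by a null bdev wrapped in a delay bdev. A hedged sketch of the same sequence driven by hand with SPDK's scripts/rpc.py against an already running nvmf_tgt (the rpc.py path and default RPC socket are assumptions; every RPC name and argument below is taken verbatim from the trace):

  # transport, subsystem (max 10 namespaces), and a listener on the target-side IP
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # null bdev NULL1 (size 1000, block size 512 as traced), wrapped in a delay bdev
  scripts/rpc.py bdev_null_create NULL1 1000 512
  scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The deliberately slow Delay0 namespace keeps the 128-deep random I/O queued long enough for the nvmf_delete_subsystem call below to interrupt it mid-flight, which is why the perf output that follows is a flood of "completed with error (sct=0, sc=8)" lines.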
00:06:18.876 14:47:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:18.876 14:47:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.876 14:47:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 starting I/O failed: -6 00:06:18.876 Write completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 starting I/O failed: -6 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 starting I/O failed: -6 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Write completed with error (sct=0, sc=8) 00:06:18.876 Write completed with error (sct=0, sc=8) 00:06:18.876 starting I/O failed: -6 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Write completed with error (sct=0, sc=8) 00:06:18.876 starting I/O failed: -6 00:06:18.876 Write completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 starting I/O failed: -6 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 starting I/O failed: -6 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Write completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 starting I/O failed: -6 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Write completed with error (sct=0, sc=8) 00:06:18.876 starting I/O failed: -6 00:06:18.876 Write completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 starting I/O failed: -6 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Write completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 starting I/O failed: -6 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 [2024-12-11 14:47:11.780458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe74a0 is same with the state(6) to be set 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Write completed with error (sct=0, sc=8) 00:06:18.876 Write completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read 
completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Write completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Write completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Write completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.876 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 [2024-12-11 14:47:11.781660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe7680 is same with the state(6) to be set 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 starting I/O failed: -6 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 starting I/O failed: -6 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 starting I/O failed: -6 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 starting I/O failed: -6 
00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 starting I/O failed: -6 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 starting I/O failed: -6 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 starting I/O failed: -6 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 starting I/O failed: -6 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 starting I/O failed: -6 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 starting I/O failed: -6 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 starting I/O failed: -6 00:06:18.877 [2024-12-11 14:47:11.785872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1130000c80 is same with the state(6) to be set 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed 
with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Write completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:18.877 Read completed with error (sct=0, sc=8) 00:06:19.812 [2024-12-11 14:47:12.761125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe89b0 is same with the state(6) to be set 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Write completed with error (sct=0, sc=8) 00:06:19.812 Write completed with error (sct=0, sc=8) 00:06:19.812 Write completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Write completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Write completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Write completed with error (sct=0, sc=8) 00:06:19.812 [2024-12-11 14:47:12.783625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe72c0 is same with the state(6) to be set 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Write completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Write completed with error (sct=0, sc=8) 
00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 [2024-12-11 14:47:12.783970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe7860 is same with the state(6) to be set 00:06:19.812 Write completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Write completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Write completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Write completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Write completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 [2024-12-11 14:47:12.787939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f113000d060 is same with the state(6) to be set 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Write completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Write completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Write completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 Read completed with error (sct=0, sc=8) 00:06:19.812 [2024-12-11 14:47:12.788616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f113000d6c0 is same with the state(6) to be set 00:06:19.812 Initializing NVMe Controllers 00:06:19.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:19.812 Controller IO queue size 128, less than required. 00:06:19.812 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:19.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:19.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:19.812 Initialization complete. Launching workers. 
00:06:19.812 ======================================================== 00:06:19.812 Latency(us) 00:06:19.812 Device Information : IOPS MiB/s Average min max 00:06:19.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.31 0.08 908548.84 1203.70 1006066.62 00:06:19.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.32 0.08 912519.64 261.35 1011048.07 00:06:19.812 ======================================================== 00:06:19.812 Total : 325.63 0.16 910528.17 261.35 1011048.07 00:06:19.812 00:06:19.812 [2024-12-11 14:47:12.789203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe89b0 (9): Bad file descriptor 00:06:19.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:19.812 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.812 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:19.812 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2949079 00:06:19.812 14:47:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2949079 00:06:20.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2949079) - No such process 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2949079 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2949079 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2949079 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.380 14:47:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.380 [2024-12-11 14:47:13.314070] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2949585 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2949585 00:06:20.380 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:20.380 [2024-12-11 14:47:13.407336] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
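The entries that follow are the script's bounded wait on the second perf run: it re-checks the initiator PID with kill -0 every half second, guarded so it gives up after roughly ten seconds; here the 3-second perf run simply finishes, kill -0 starts reporting "No such process", and the test moves on to cleanup. A rough bash sketch of that polling pattern, not the script verbatim; perf_pid stands in for the 2949585 traced here:

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      # give up after ~10 s (20 iterations x 0.5 s sleep)
      if (( delay++ > 20 )); then
          echo "spdk_nvme_perf ($perf_pid) still running after timeout" >&2
          break
      fi
      sleep 0.5
  done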
00:06:20.951 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:20.951 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2949585 00:06:20.951 14:47:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:21.518 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:21.518 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2949585 00:06:21.518 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.084 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.084 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2949585 00:06:22.084 14:47:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.342 14:47:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.342 14:47:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2949585 00:06:22.342 14:47:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:22.909 14:47:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:22.909 14:47:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2949585 00:06:22.909 14:47:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:23.476 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:23.476 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2949585 00:06:23.476 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:23.735 Initializing NVMe Controllers 00:06:23.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:23.735 Controller IO queue size 128, less than required. 00:06:23.735 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:23.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:23.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:23.735 Initialization complete. Launching workers. 
00:06:23.735 ======================================================== 00:06:23.735 Latency(us) 00:06:23.735 Device Information : IOPS MiB/s Average min max 00:06:23.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003159.02 1000145.51 1042136.51 00:06:23.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004456.22 1000354.89 1042058.49 00:06:23.735 ======================================================== 00:06:23.735 Total : 256.00 0.12 1003807.62 1000145.51 1042136.51 00:06:23.735 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2949585 00:06:23.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2949585) - No such process 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2949585 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:23.995 rmmod nvme_tcp 00:06:23.995 rmmod nvme_fabrics 00:06:23.995 rmmod nvme_keyring 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2948866 ']' 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2948866 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2948866 ']' 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2948866 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2948866 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2948866' 00:06:23.995 killing process with pid 2948866 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2948866 00:06:23.995 14:47:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2948866 00:06:24.255 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:24.255 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:24.255 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:24.255 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:24.255 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:24.255 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:24.255 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:24.255 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:24.255 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:24.255 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.255 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.255 14:47:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.161 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:26.420 00:06:26.421 real 0m16.313s 00:06:26.421 user 0m29.179s 00:06:26.421 sys 0m5.587s 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:26.421 ************************************ 00:06:26.421 END TEST nvmf_delete_subsystem 00:06:26.421 ************************************ 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:26.421 ************************************ 00:06:26.421 START TEST nvmf_host_management 00:06:26.421 ************************************ 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:26.421 * Looking for test storage... 
00:06:26.421 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.421 --rc genhtml_branch_coverage=1 00:06:26.421 --rc genhtml_function_coverage=1 00:06:26.421 --rc genhtml_legend=1 00:06:26.421 --rc geninfo_all_blocks=1 00:06:26.421 --rc geninfo_unexecuted_blocks=1 00:06:26.421 00:06:26.421 ' 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.421 --rc genhtml_branch_coverage=1 00:06:26.421 --rc genhtml_function_coverage=1 00:06:26.421 --rc genhtml_legend=1 00:06:26.421 --rc geninfo_all_blocks=1 00:06:26.421 --rc geninfo_unexecuted_blocks=1 00:06:26.421 00:06:26.421 ' 00:06:26.421 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.421 --rc genhtml_branch_coverage=1 00:06:26.421 --rc genhtml_function_coverage=1 00:06:26.421 --rc genhtml_legend=1 00:06:26.421 --rc geninfo_all_blocks=1 00:06:26.421 --rc geninfo_unexecuted_blocks=1 00:06:26.421 00:06:26.421 ' 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.682 --rc genhtml_branch_coverage=1 00:06:26.682 --rc genhtml_function_coverage=1 00:06:26.682 --rc genhtml_legend=1 00:06:26.682 --rc geninfo_all_blocks=1 00:06:26.682 --rc geninfo_unexecuted_blocks=1 00:06:26.682 00:06:26.682 ' 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:26.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:26.682 14:47:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:33.257 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.257 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:33.258 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:33.258 Found net devices under 0000:86:00.0: cvl_0_0 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.258 14:47:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:33.258 Found net devices under 0000:86:00.1: cvl_0_1 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:33.258 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:33.258 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:06:33.258 00:06:33.258 --- 10.0.0.2 ping statistics --- 00:06:33.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.258 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:33.258 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:33.258 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:06:33.258 00:06:33.258 --- 10.0.0.1 ping statistics --- 00:06:33.258 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.258 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2953808 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2953808 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:33.258 14:47:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2953808 ']' 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.258 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.258 [2024-12-11 14:47:25.562246] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:33.258 [2024-12-11 14:47:25.562298] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.258 [2024-12-11 14:47:25.642826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.259 [2024-12-11 14:47:25.685926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:33.259 [2024-12-11 14:47:25.685964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:33.259 [2024-12-11 14:47:25.685972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.259 [2024-12-11 14:47:25.685978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.259 [2024-12-11 14:47:25.685984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:33.259 [2024-12-11 14:47:25.687491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.259 [2024-12-11 14:47:25.687603] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.259 [2024-12-11 14:47:25.687709] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.259 [2024-12-11 14:47:25.687710] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.259 [2024-12-11 14:47:25.833224] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.259 Malloc0 00:06:33.259 [2024-12-11 14:47:25.902193] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2953868 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2953868 /var/tmp/bdevperf.sock 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2953868 ']' 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:33.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:33.259 { 00:06:33.259 "params": { 00:06:33.259 "name": "Nvme$subsystem", 00:06:33.259 "trtype": "$TEST_TRANSPORT", 00:06:33.259 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:33.259 "adrfam": "ipv4", 00:06:33.259 "trsvcid": "$NVMF_PORT", 00:06:33.259 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:33.259 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:33.259 "hdgst": ${hdgst:-false}, 00:06:33.259 "ddgst": ${ddgst:-false} 00:06:33.259 }, 00:06:33.259 "method": "bdev_nvme_attach_controller" 00:06:33.259 } 00:06:33.259 EOF 00:06:33.259 )") 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:33.259 14:47:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:33.259 "params": { 00:06:33.259 "name": "Nvme0", 00:06:33.259 "trtype": "tcp", 00:06:33.259 "traddr": "10.0.0.2", 00:06:33.259 "adrfam": "ipv4", 00:06:33.259 "trsvcid": "4420", 00:06:33.259 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:33.259 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:33.259 "hdgst": false, 00:06:33.259 "ddgst": false 00:06:33.259 }, 00:06:33.259 "method": "bdev_nvme_attach_controller" 00:06:33.259 }' 00:06:33.259 [2024-12-11 14:47:25.997629] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:33.259 [2024-12-11 14:47:25.997674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2953868 ] 00:06:33.259 [2024-12-11 14:47:26.075128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.259 [2024-12-11 14:47:26.115737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.522 Running I/O for 10 seconds... 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=105 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 105 -ge 100 ']' 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:33.522 14:47:26 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:33.522 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.523 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.523 [2024-12-11 14:47:26.415846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.415995] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416015] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the 
state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416185] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.523 [2024-12-11 14:47:26.416198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.524 [2024-12-11 14:47:26.416204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.524 [2024-12-11 14:47:26.416209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.524 [2024-12-11 14:47:26.416215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.524 [2024-12-11 14:47:26.416222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.524 [2024-12-11 14:47:26.416229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.524 [2024-12-11 14:47:26.416236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.524 [2024-12-11 14:47:26.416242] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.524 [2024-12-11 14:47:26.416248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.524 [2024-12-11 14:47:26.416254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.524 [2024-12-11 14:47:26.416260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.524 [2024-12-11 14:47:26.416266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.524 [2024-12-11 14:47:26.416271] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220ded0 is same with the state(6) to be set 00:06:33.524 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.524 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:33.524 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.524 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:33.524 [2024-12-11 14:47:26.424096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.524 [2024-12-11 14:47:26.424129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.524 [2024-12-11 14:47:26.424139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.524 [2024-12-11 14:47:26.424146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.524 [2024-12-11 14:47:26.424153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.524 [2024-12-11 14:47:26.424165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.524 [2024-12-11 14:47:26.424173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:33.524 [2024-12-11 14:47:26.424179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.524 [2024-12-11 14:47:26.424186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15671a0 is same with the state(6) to be set 00:06:33.524 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.524 14:47:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:33.524 [2024-12-11 14:47:26.430528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.524 [2024-12-11 14:47:26.430557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.524 [2024-12-11 14:47:26.430574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.524 [2024-12-11 14:47:26.430583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.524 [2024-12-11 14:47:26.430594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.524 [2024-12-11 14:47:26.430602] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.524 [2024-12-11 14:47:26.430612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.524 [2024-12-11 14:47:26.430620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.524 [2024-12-11 14:47:26.430628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.524 [2024-12-11 14:47:26.430635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.524 [2024-12-11 14:47:26.430646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.524 [2024-12-11 14:47:26.430654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.524 [2024-12-11 14:47:26.430663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.524 [2024-12-11 14:47:26.430674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.524 [2024-12-11 14:47:26.430683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.524 [2024-12-11 14:47:26.430690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.524 [2024-12-11 14:47:26.430699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.524 [2024-12-11 14:47:26.430706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.524 [2024-12-11 14:47:26.430714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.524 [2024-12-11 14:47:26.430721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.524 [2024-12-11 14:47:26.430729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.524 [2024-12-11 14:47:26.430735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.524 [2024-12-11 14:47:26.430744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.524 [2024-12-11 14:47:26.430751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.524 [2024-12-11 14:47:26.430759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.525 [2024-12-11 14:47:26.430766] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.525 [2024-12-11 14:47:26.430775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.525 [2024-12-11 14:47:26.430782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.525 [2024-12-11 14:47:26.430790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.525 [2024-12-11 14:47:26.430797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.525 [2024-12-11 14:47:26.430805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.525 [2024-12-11 14:47:26.430811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.525 [2024-12-11 14:47:26.430819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.525 [2024-12-11 14:47:26.430826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.525 [2024-12-11 14:47:26.430836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.525 [2024-12-11 14:47:26.430844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.525 [2024-12-11 14:47:26.430853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.525 [2024-12-11 14:47:26.430859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.525 [2024-12-11 14:47:26.430870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.525 [2024-12-11 14:47:26.430877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.525 [2024-12-11 14:47:26.430886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.525 [2024-12-11 14:47:26.430893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.525 [2024-12-11 14:47:26.430901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.525 [2024-12-11 14:47:26.430908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:33.525 [2024-12-11 14:47:26.430916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:33.525 [2024-12-11 14:47:26.430923] nvme_qpair.c: 
00:06:33.525 [condensed] nvme_qpair.c: repeated 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion notice pairs omitted here — every outstanding WRITE (sqid:1, cid:21-61, nsid:1, lba 27264-32384, len:128) completed as ABORTED - SQ DELETION (00/08) qid:1 while the submission queue was being deleted.
00:06:33.526 [2024-12-11 14:47:26.432578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:06:33.526 task offset: 24320 on job bdev=Nvme0n1 fails
00:06:33.526
00:06:33.526 Latency(us)
00:06:33.526 [2024-12-11T13:47:26.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:33.526 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:33.526 Job: Nvme0n1 ended in about 0.11 seconds with error
00:06:33.526 Verification LBA range: start 0x0 length 0x400
00:06:33.526 Nvme0n1 : 0.11 1703.64 106.48 573.86 0.00 25895.30 1403.33 28151.99
00:06:33.526 [2024-12-11T13:47:26.574Z] ===================================================================================================================
00:06:33.526 [2024-12-11T13:47:26.575Z] Total : 1703.64 106.48 573.86 0.00 25895.30 1403.33 28151.99
00:06:33.527 [2024-12-11 14:47:26.434978] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:33.527 [2024-12-11 14:47:26.434999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15671a0 (9): Bad file descriptor
00:06:33.527 [2024-12-11 14:47:26.445924] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:06:34.464 14:47:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2953868 00:06:34.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2953868) - No such process 00:06:34.464 14:47:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:34.464 14:47:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:34.464 14:47:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:34.464 14:47:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:34.464 14:47:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:34.464 14:47:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:34.464 14:47:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:34.464 14:47:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:34.464 { 00:06:34.464 "params": { 00:06:34.464 "name": "Nvme$subsystem", 00:06:34.464 "trtype": "$TEST_TRANSPORT", 00:06:34.464 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:34.464 "adrfam": "ipv4", 00:06:34.464 "trsvcid": "$NVMF_PORT", 00:06:34.464 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:34.464 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:34.464 "hdgst": ${hdgst:-false}, 00:06:34.464 "ddgst": ${ddgst:-false} 00:06:34.464 }, 00:06:34.464 "method": "bdev_nvme_attach_controller" 00:06:34.464 } 00:06:34.464 EOF 00:06:34.464 )") 00:06:34.464 14:47:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:34.464 14:47:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:34.464 14:47:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:34.464 14:47:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:34.464 "params": { 00:06:34.464 "name": "Nvme0", 00:06:34.464 "trtype": "tcp", 00:06:34.464 "traddr": "10.0.0.2", 00:06:34.464 "adrfam": "ipv4", 00:06:34.464 "trsvcid": "4420", 00:06:34.464 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:34.464 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:34.464 "hdgst": false, 00:06:34.464 "ddgst": false 00:06:34.464 }, 00:06:34.464 "method": "bdev_nvme_attach_controller" 00:06:34.464 }' 00:06:34.464 [2024-12-11 14:47:27.484342] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:06:34.464 [2024-12-11 14:47:27.484390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2954155 ] 00:06:34.724 [2024-12-11 14:47:27.560802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.724 [2024-12-11 14:47:27.600028] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.724 Running I/O for 1 seconds... 
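[editor's note] For reference, the bdevperf invocation traced above can be reproduced outside the harness with a config file on disk instead of the /dev/fd/62 substitution. This is a minimal sketch: the connection parameters and command-line flags are taken verbatim from the gen_nvmf_target_json output printed in this log, while the enclosing "subsystems"/"bdev" wrapper and the /tmp path are assumptions, since the full config layout is not shown here.

# Hypothetical standalone reproduction; run from the SPDK repo root.
# Wrapper layout below is an assumption; params/flags are from the log above.
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1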
00:06:36.102 1984.00 IOPS, 124.00 MiB/s 00:06:36.102 Latency(us) 00:06:36.102 [2024-12-11T13:47:29.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:36.102 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:36.102 Verification LBA range: start 0x0 length 0x400 00:06:36.102 Nvme0n1 : 1.02 2003.09 125.19 0.00 0.00 31446.06 4786.98 27582.11 00:06:36.102 [2024-12-11T13:47:29.150Z] =================================================================================================================== 00:06:36.102 [2024-12-11T13:47:29.150Z] Total : 2003.09 125.19 0.00 0.00 31446.06 4786.98 27582.11 00:06:36.102 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:36.102 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:36.102 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:06:36.102 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:06:36.102 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:36.102 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:36.102 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:36.102 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:36.102 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:36.102 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:36.102 14:47:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:36.102 rmmod nvme_tcp 00:06:36.102 rmmod nvme_fabrics 00:06:36.102 rmmod nvme_keyring 00:06:36.102 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:36.102 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:36.102 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:36.102 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2953808 ']' 00:06:36.102 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2953808 00:06:36.102 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2953808 ']' 00:06:36.102 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2953808 00:06:36.102 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:36.102 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.102 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2953808 00:06:36.102 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:36.102 14:47:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:36.102 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2953808' 00:06:36.102 killing process with pid 2953808 00:06:36.102 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2953808 00:06:36.102 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2953808 00:06:36.365 [2024-12-11 14:47:29.248390] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:36.365 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:36.365 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:36.365 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:36.365 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:36.365 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:36.365 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:36.365 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:36.365 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:36.365 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:36.365 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.365 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:36.365 14:47:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.354 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:38.354 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:38.354 00:06:38.354 real 0m12.058s 00:06:38.354 user 0m17.908s 00:06:38.354 sys 0m5.507s 00:06:38.354 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.354 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.354 ************************************ 00:06:38.354 END TEST nvmf_host_management 00:06:38.354 ************************************ 00:06:38.354 14:47:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:38.354 14:47:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:38.354 14:47:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.354 14:47:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:38.614 ************************************ 00:06:38.614 START TEST nvmf_lvol 00:06:38.614 ************************************ 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:38.614 * Looking for test storage... 00:06:38.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:38.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.614 --rc genhtml_branch_coverage=1 00:06:38.614 --rc genhtml_function_coverage=1 00:06:38.614 --rc genhtml_legend=1 00:06:38.614 --rc geninfo_all_blocks=1 00:06:38.614 --rc geninfo_unexecuted_blocks=1 00:06:38.614 00:06:38.614 ' 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:38.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.614 --rc genhtml_branch_coverage=1 00:06:38.614 --rc genhtml_function_coverage=1 00:06:38.614 --rc genhtml_legend=1 00:06:38.614 --rc geninfo_all_blocks=1 00:06:38.614 --rc geninfo_unexecuted_blocks=1 00:06:38.614 00:06:38.614 ' 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:38.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.614 --rc genhtml_branch_coverage=1 00:06:38.614 --rc genhtml_function_coverage=1 00:06:38.614 --rc genhtml_legend=1 00:06:38.614 --rc geninfo_all_blocks=1 00:06:38.614 --rc geninfo_unexecuted_blocks=1 00:06:38.614 00:06:38.614 ' 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:38.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.614 --rc genhtml_branch_coverage=1 00:06:38.614 --rc genhtml_function_coverage=1 00:06:38.614 --rc genhtml_legend=1 00:06:38.614 --rc geninfo_all_blocks=1 00:06:38.614 --rc geninfo_unexecuted_blocks=1 00:06:38.614 00:06:38.614 ' 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
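[editor's note] The lvol flow that this test ends up driving is traced in full further down in this log; as a rough sketch it condenses to the RPC sequence below. Every command name and argument is taken from that trace; a running nvmf_tgt, the repo's scripts/rpc.py on its default socket, and the 10.0.0.2:4420 listener from this run are the assumptions.

# Hypothetical condensed form of the nvmf_lvol flow traced later in this log.
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                     # -> Malloc0
$rpc bdev_malloc_create 64 512                     # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)     # lvstore UUID is printed
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)    # 20 MiB lvol, UUID is printed
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                   # grow the lvol from 20 to 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"
# teardown, as at the end of the test:
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"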
00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.614 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:38.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:38.615 14:47:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:45.189 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.189 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:45.190 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:45.190 14:47:37 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:45.190 Found net devices under 0000:86:00.0: cvl_0_0 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:45.190 Found net devices under 0000:86:00.1: cvl_0_1 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:45.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:45.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:06:45.190 00:06:45.190 --- 10.0.0.2 ping statistics --- 00:06:45.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.190 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:45.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:45.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:06:45.190 00:06:45.190 --- 10.0.0.1 ping statistics --- 00:06:45.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.190 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2958096 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2958096 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2958096 ']' 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:45.190 [2024-12-11 14:47:37.675487] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:06:45.190 [2024-12-11 14:47:37.675537] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.190 [2024-12-11 14:47:37.756553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:45.190 [2024-12-11 14:47:37.798153] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:45.190 [2024-12-11 14:47:37.798193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:45.190 [2024-12-11 14:47:37.798201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:45.190 [2024-12-11 14:47:37.798207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:45.190 [2024-12-11 14:47:37.798213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:45.190 [2024-12-11 14:47:37.799604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.190 [2024-12-11 14:47:37.799711] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.190 [2024-12-11 14:47:37.799711] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:45.190 14:47:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:45.190 [2024-12-11 14:47:38.102248] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.190 14:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:45.450 14:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:45.450 14:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:45.709 14:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:45.709 14:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:45.968 14:47:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:45.968 14:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=17bd0bc5-215a-47b6-9016-59847cfd7128 00:06:45.968 14:47:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u 17bd0bc5-215a-47b6-9016-59847cfd7128 lvol 20 00:06:46.227 14:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7304c335-4442-40b2-8dc2-4f93b9c231ea 00:06:46.227 14:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:46.486 14:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7304c335-4442-40b2-8dc2-4f93b9c231ea 00:06:46.745 14:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:46.745 [2024-12-11 14:47:39.776285] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.004 14:47:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:47.004 14:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2958451 00:06:47.004 14:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:47.004 14:47:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:48.383 14:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_snapshot 7304c335-4442-40b2-8dc2-4f93b9c231ea MY_SNAPSHOT 00:06:48.383 14:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=3f9e031d-007d-48ee-bd35-36a57f93714b 00:06:48.383 14:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_resize 7304c335-4442-40b2-8dc2-4f93b9c231ea 30 00:06:48.642 14:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_clone 3f9e031d-007d-48ee-bd35-36a57f93714b MY_CLONE 00:06:48.900 14:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=89232487-9884-4256-8723-ebb9613a5f86 00:06:48.900 14:47:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_inflate 89232487-9884-4256-8723-ebb9613a5f86 00:06:49.469 14:47:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2958451 00:06:57.593 Initializing NVMe Controllers 00:06:57.593 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:57.593 Controller IO queue size 128, less than required. 00:06:57.593 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:57.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:57.593 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:57.593 Initialization complete. Launching workers. 00:06:57.593 ======================================================== 00:06:57.593 Latency(us) 00:06:57.593 Device Information : IOPS MiB/s Average min max 00:06:57.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11941.00 46.64 10721.74 1871.19 59531.92 00:06:57.593 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11817.60 46.16 10835.11 3245.29 50816.29 00:06:57.593 ======================================================== 00:06:57.593 Total : 23758.60 92.81 10778.13 1871.19 59531.92 00:06:57.593 00:06:57.593 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:57.593 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete 7304c335-4442-40b2-8dc2-4f93b9c231ea 00:06:57.851 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 17bd0bc5-215a-47b6-9016-59847cfd7128 00:06:57.851 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:57.851 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:57.851 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:57.851 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:57.851 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:57.851 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:57.851 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:57.851 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:57.851 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:57.851 rmmod nvme_tcp 00:06:57.851 rmmod nvme_fabrics 00:06:58.110 rmmod nvme_keyring 00:06:58.110 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:58.110 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:58.110 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:58.110 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2958096 ']' 00:06:58.110 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2958096 00:06:58.110 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2958096 ']' 00:06:58.110 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2958096 00:06:58.110 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:58.110 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.110 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2958096 00:06:58.110 14:47:50 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.110 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.110 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2958096' 00:06:58.110 killing process with pid 2958096 00:06:58.110 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2958096 00:06:58.110 14:47:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2958096 00:06:58.370 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:58.370 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:58.370 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:58.370 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:58.370 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:58.370 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:58.370 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:58.370 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:58.370 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:58.370 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:58.370 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:58.370 14:47:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.276 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:00.276 00:07:00.276 real 0m21.842s 00:07:00.276 user 1m2.899s 00:07:00.276 sys 0m7.491s 00:07:00.276 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.276 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:00.276 ************************************ 00:07:00.276 END TEST nvmf_lvol 00:07:00.276 ************************************ 00:07:00.276 14:47:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:00.276 14:47:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.276 14:47:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.276 14:47:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:00.536 ************************************ 00:07:00.536 START TEST nvmf_lvs_grow 00:07:00.536 ************************************ 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:00.536 * Looking for test storage... 
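For reference, the nvmf_lvol run above reduces to a short rpc.py sequence: export an lvol over NVMe/TCP, then snapshot, resize, clone and inflate it while perf I/O is in flight. A minimal sketch of that sequence, assuming a running nvmf_tgt, an existing lvstore UUID in $LVS, and rpc.py standing in for the full scripts/rpc.py path used in the log:

  # create a 20 MiB lvol on the lvstore and export it over NVMe/TCP
  LVOL=$(rpc.py bdev_lvol_create -u "$LVS" lvol 20)
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # while spdk_nvme_perf writes to the namespace: snapshot, resize, clone, inflate
  SNAP=$(rpc.py bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT)
  rpc.py bdev_lvol_resize "$LVOL" 30
  CLONE=$(rpc.py bdev_lvol_clone "$SNAP" MY_CLONE)
  rpc.py bdev_lvol_inflate "$CLONE"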
00:07:00.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.536 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:00.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.537 --rc genhtml_branch_coverage=1 00:07:00.537 --rc genhtml_function_coverage=1 00:07:00.537 --rc genhtml_legend=1 00:07:00.537 --rc geninfo_all_blocks=1 00:07:00.537 --rc geninfo_unexecuted_blocks=1 00:07:00.537 00:07:00.537 ' 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:00.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.537 --rc genhtml_branch_coverage=1 00:07:00.537 --rc genhtml_function_coverage=1 00:07:00.537 --rc genhtml_legend=1 00:07:00.537 --rc geninfo_all_blocks=1 00:07:00.537 --rc geninfo_unexecuted_blocks=1 00:07:00.537 00:07:00.537 ' 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:00.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.537 --rc genhtml_branch_coverage=1 00:07:00.537 --rc genhtml_function_coverage=1 00:07:00.537 --rc genhtml_legend=1 00:07:00.537 --rc geninfo_all_blocks=1 00:07:00.537 --rc geninfo_unexecuted_blocks=1 00:07:00.537 00:07:00.537 ' 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:00.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.537 --rc genhtml_branch_coverage=1 00:07:00.537 --rc genhtml_function_coverage=1 00:07:00.537 --rc genhtml_legend=1 00:07:00.537 --rc geninfo_all_blocks=1 00:07:00.537 --rc geninfo_unexecuted_blocks=1 00:07:00.537 00:07:00.537 ' 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:00.537 14:47:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:00.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:00.537 14:47:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:07.111 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.111 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:07.111 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:07.111 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:07.111 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:07.111 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:07.111 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:07.111 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:07.111 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:07.112 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:07.112 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:07.112 14:47:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:07.112 Found net devices under 0000:86:00.0: cvl_0_0 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:07.112 Found net devices under 0000:86:00.1: cvl_0_1 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:07.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:07.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:07:07.112 00:07:07.112 --- 10.0.0.2 ping statistics --- 00:07:07.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.112 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:07.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:07:07.112 00:07:07.112 --- 10.0.0.1 ping statistics --- 00:07:07.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.112 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:07.112 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2963887 00:07:07.113 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2963887 00:07:07.113 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:07.113 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2963887 ']' 00:07:07.113 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.113 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.113 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.113 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.113 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:07.113 [2024-12-11 14:47:59.648150] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:07.113 [2024-12-11 14:47:59.648208] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.113 [2024-12-11 14:47:59.729980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.113 [2024-12-11 14:47:59.770690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.113 [2024-12-11 14:47:59.770725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.113 [2024-12-11 14:47:59.770732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.113 [2024-12-11 14:47:59.770738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.113 [2024-12-11 14:47:59.770744] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:07.113 [2024-12-11 14:47:59.771255] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.113 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.113 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:07.113 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:07.113 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:07.113 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:07.113 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.113 14:47:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:07.113 [2024-12-11 14:48:00.075143] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.113 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:07.113 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.113 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.113 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:07.113 ************************************ 00:07:07.113 START TEST lvs_grow_clean 00:07:07.113 ************************************ 00:07:07.113 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:07.113 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:07.113 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:07.113 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:07.113 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:07.113 14:48:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:07.113 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:07.113 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:07:07.113 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:07:07.113 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:07.372 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:07.372 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:07.631 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f098eb8c-ff75-4877-b581-cf9a800ede69 00:07:07.631 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f098eb8c-ff75-4877-b581-cf9a800ede69 00:07:07.631 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:07.890 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:07.890 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:07.890 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u f098eb8c-ff75-4877-b581-cf9a800ede69 lvol 150 00:07:08.148 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e5194ecd-fe6e-489f-9980-3aad373bbf1d 00:07:08.148 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:07:08.148 14:48:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:08.148 [2024-12-11 14:48:01.155788] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:08.148 [2024-12-11 14:48:01.155837] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:08.148 true 00:07:08.148 14:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u f098eb8c-ff75-4877-b581-cf9a800ede69 00:07:08.148 14:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:08.407 14:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:08.407 14:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:08.665 14:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e5194ecd-fe6e-489f-9980-3aad373bbf1d 00:07:08.925 14:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:08.925 [2024-12-11 14:48:01.898008] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.925 14:48:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:09.184 14:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:09.184 14:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2964387 00:07:09.184 14:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:09.184 14:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2964387 /var/tmp/bdevperf.sock 00:07:09.184 14:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2964387 ']' 00:07:09.184 14:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:09.184 14:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.184 14:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:09.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:09.184 14:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.184 14:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:09.184 [2024-12-11 14:48:02.127180] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:09.184 [2024-12-11 14:48:02.127230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2964387 ] 00:07:09.184 [2024-12-11 14:48:02.203235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.444 [2024-12-11 14:48:02.246272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.444 14:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.444 14:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:09.444 14:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:09.703 Nvme0n1 00:07:09.703 14:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:09.962 [ 00:07:09.962 { 00:07:09.962 "name": "Nvme0n1", 00:07:09.962 "aliases": [ 00:07:09.962 "e5194ecd-fe6e-489f-9980-3aad373bbf1d" 00:07:09.962 ], 00:07:09.962 "product_name": "NVMe disk", 00:07:09.962 "block_size": 4096, 00:07:09.962 "num_blocks": 38912, 00:07:09.962 "uuid": "e5194ecd-fe6e-489f-9980-3aad373bbf1d", 00:07:09.962 "numa_id": 1, 00:07:09.962 "assigned_rate_limits": { 00:07:09.962 "rw_ios_per_sec": 0, 00:07:09.962 "rw_mbytes_per_sec": 0, 00:07:09.962 "r_mbytes_per_sec": 0, 00:07:09.962 "w_mbytes_per_sec": 0 00:07:09.962 }, 00:07:09.962 "claimed": false, 00:07:09.962 "zoned": false, 00:07:09.962 "supported_io_types": { 00:07:09.962 "read": true, 00:07:09.962 "write": true, 00:07:09.962 "unmap": true, 00:07:09.962 "flush": true, 00:07:09.962 "reset": true, 00:07:09.962 "nvme_admin": true, 00:07:09.962 "nvme_io": true, 00:07:09.962 "nvme_io_md": false, 00:07:09.962 "write_zeroes": true, 00:07:09.962 "zcopy": false, 00:07:09.962 "get_zone_info": false, 00:07:09.962 "zone_management": false, 00:07:09.962 "zone_append": false, 00:07:09.962 "compare": true, 00:07:09.962 "compare_and_write": true, 00:07:09.962 "abort": true, 00:07:09.962 "seek_hole": false, 00:07:09.962 "seek_data": false, 00:07:09.962 "copy": true, 00:07:09.962 "nvme_iov_md": false 00:07:09.962 }, 00:07:09.962 "memory_domains": [ 00:07:09.962 { 00:07:09.962 "dma_device_id": "system", 00:07:09.962 "dma_device_type": 1 00:07:09.962 } 00:07:09.962 ], 00:07:09.962 "driver_specific": { 00:07:09.962 "nvme": [ 00:07:09.962 { 00:07:09.962 "trid": { 00:07:09.962 "trtype": "TCP", 00:07:09.962 "adrfam": "IPv4", 00:07:09.962 "traddr": "10.0.0.2", 00:07:09.962 "trsvcid": "4420", 00:07:09.962 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:09.962 }, 00:07:09.962 "ctrlr_data": { 00:07:09.962 "cntlid": 1, 00:07:09.962 "vendor_id": "0x8086", 00:07:09.962 "model_number": "SPDK bdev Controller", 00:07:09.962 "serial_number": "SPDK0", 00:07:09.962 "firmware_revision": "25.01", 00:07:09.962 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:09.962 "oacs": { 00:07:09.962 "security": 0, 00:07:09.962 "format": 0, 00:07:09.962 "firmware": 0, 00:07:09.962 "ns_manage": 0 00:07:09.962 }, 00:07:09.962 "multi_ctrlr": true, 00:07:09.962 
"ana_reporting": false 00:07:09.962 }, 00:07:09.962 "vs": { 00:07:09.962 "nvme_version": "1.3" 00:07:09.962 }, 00:07:09.962 "ns_data": { 00:07:09.962 "id": 1, 00:07:09.962 "can_share": true 00:07:09.962 } 00:07:09.962 } 00:07:09.962 ], 00:07:09.962 "mp_policy": "active_passive" 00:07:09.962 } 00:07:09.962 } 00:07:09.962 ] 00:07:09.962 14:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2964617 00:07:09.962 14:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:09.962 14:48:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:09.962 Running I/O for 10 seconds... 00:07:10.900 Latency(us) 00:07:10.900 [2024-12-11T13:48:03.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.900 Nvme0n1 : 1.00 22737.00 88.82 0.00 0.00 0.00 0.00 0.00 00:07:10.900 [2024-12-11T13:48:03.948Z] =================================================================================================================== 00:07:10.900 [2024-12-11T13:48:03.948Z] Total : 22737.00 88.82 0.00 0.00 0.00 0.00 0.00 00:07:10.900 00:07:11.840 14:48:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f098eb8c-ff75-4877-b581-cf9a800ede69 00:07:12.099 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.099 Nvme0n1 : 2.00 22904.50 89.47 0.00 0.00 0.00 0.00 0.00 00:07:12.099 [2024-12-11T13:48:05.147Z] =================================================================================================================== 00:07:12.099 [2024-12-11T13:48:05.147Z] Total : 22904.50 89.47 0.00 0.00 0.00 0.00 0.00 00:07:12.099 00:07:12.099 true 00:07:12.099 14:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f098eb8c-ff75-4877-b581-cf9a800ede69 00:07:12.099 14:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:12.356 14:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:12.356 14:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:12.356 14:48:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2964617 00:07:12.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.923 Nvme0n1 : 3.00 22970.67 89.73 0.00 0.00 0.00 0.00 0.00 00:07:12.923 [2024-12-11T13:48:05.971Z] =================================================================================================================== 00:07:12.923 [2024-12-11T13:48:05.971Z] Total : 22970.67 89.73 0.00 0.00 0.00 0.00 0.00 00:07:12.923 00:07:14.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.300 Nvme0n1 : 4.00 23040.50 90.00 0.00 0.00 0.00 0.00 0.00 00:07:14.300 [2024-12-11T13:48:07.348Z] 
=================================================================================================================== 00:07:14.300 [2024-12-11T13:48:07.348Z] Total : 23040.50 90.00 0.00 0.00 0.00 0.00 0.00 00:07:14.300 00:07:15.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.237 Nvme0n1 : 5.00 23094.60 90.21 0.00 0.00 0.00 0.00 0.00 00:07:15.237 [2024-12-11T13:48:08.285Z] =================================================================================================================== 00:07:15.237 [2024-12-11T13:48:08.285Z] Total : 23094.60 90.21 0.00 0.00 0.00 0.00 0.00 00:07:15.237 00:07:16.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.174 Nvme0n1 : 6.00 23132.50 90.36 0.00 0.00 0.00 0.00 0.00 00:07:16.174 [2024-12-11T13:48:09.222Z] =================================================================================================================== 00:07:16.174 [2024-12-11T13:48:09.222Z] Total : 23132.50 90.36 0.00 0.00 0.00 0.00 0.00 00:07:16.174 00:07:17.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.111 Nvme0n1 : 7.00 23159.86 90.47 0.00 0.00 0.00 0.00 0.00 00:07:17.111 [2024-12-11T13:48:10.159Z] =================================================================================================================== 00:07:17.111 [2024-12-11T13:48:10.159Z] Total : 23159.86 90.47 0.00 0.00 0.00 0.00 0.00 00:07:17.111 00:07:18.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.048 Nvme0n1 : 8.00 23157.00 90.46 0.00 0.00 0.00 0.00 0.00 00:07:18.048 [2024-12-11T13:48:11.096Z] =================================================================================================================== 00:07:18.048 [2024-12-11T13:48:11.096Z] Total : 23157.00 90.46 0.00 0.00 0.00 0.00 0.00 00:07:18.048 00:07:18.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.986 Nvme0n1 : 9.00 23181.67 90.55 0.00 0.00 0.00 0.00 0.00 00:07:18.986 [2024-12-11T13:48:12.034Z] =================================================================================================================== 00:07:18.986 [2024-12-11T13:48:12.034Z] Total : 23181.67 90.55 0.00 0.00 0.00 0.00 0.00 00:07:18.986 00:07:19.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.923 Nvme0n1 : 10.00 23197.80 90.62 0.00 0.00 0.00 0.00 0.00 00:07:19.923 [2024-12-11T13:48:12.971Z] =================================================================================================================== 00:07:19.923 [2024-12-11T13:48:12.971Z] Total : 23197.80 90.62 0.00 0.00 0.00 0.00 0.00 00:07:19.923 00:07:19.923 00:07:19.923 Latency(us) 00:07:19.923 [2024-12-11T13:48:12.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:19.923 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.923 Nvme0n1 : 10.00 23202.94 90.64 0.00 0.00 5513.32 3219.81 15728.64 00:07:19.923 [2024-12-11T13:48:12.971Z] =================================================================================================================== 00:07:19.923 [2024-12-11T13:48:12.971Z] Total : 23202.94 90.64 0.00 0.00 5513.32 3219.81 15728.64 00:07:19.923 { 00:07:19.923 "results": [ 00:07:19.923 { 00:07:19.923 "job": "Nvme0n1", 00:07:19.923 "core_mask": "0x2", 00:07:19.923 "workload": "randwrite", 00:07:19.923 "status": "finished", 00:07:19.923 "queue_depth": 128, 00:07:19.923 "io_size": 4096, 00:07:19.923 
"runtime": 10.003302, 00:07:19.923 "iops": 23202.938389743707, 00:07:19.923 "mibps": 90.63647808493636, 00:07:19.923 "io_failed": 0, 00:07:19.923 "io_timeout": 0, 00:07:19.923 "avg_latency_us": 5513.324224351768, 00:07:19.923 "min_latency_us": 3219.8121739130434, 00:07:19.923 "max_latency_us": 15728.64 00:07:19.923 } 00:07:19.923 ], 00:07:19.924 "core_count": 1 00:07:19.924 } 00:07:20.182 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2964387 00:07:20.182 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2964387 ']' 00:07:20.182 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2964387 00:07:20.182 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:20.182 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.182 14:48:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2964387 00:07:20.182 14:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:20.182 14:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:20.182 14:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2964387' 00:07:20.182 killing process with pid 2964387 00:07:20.182 14:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2964387 00:07:20.182 Received shutdown signal, test time was about 10.000000 seconds 00:07:20.182 00:07:20.182 Latency(us) 00:07:20.182 [2024-12-11T13:48:13.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.182 [2024-12-11T13:48:13.230Z] =================================================================================================================== 00:07:20.182 [2024-12-11T13:48:13.230Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:20.182 14:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2964387 00:07:20.182 14:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:20.440 14:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:20.699 14:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f098eb8c-ff75-4877-b581-cf9a800ede69 00:07:20.699 14:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:20.957 14:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:20.957 14:48:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:20.957 14:48:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:21.217 [2024-12-11 14:48:14.023911] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:21.217 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f098eb8c-ff75-4877-b581-cf9a800ede69 00:07:21.217 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:21.217 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f098eb8c-ff75-4877-b581-cf9a800ede69 00:07:21.217 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:07:21.217 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.217 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:07:21.217 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.217 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:07:21.217 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:21.217 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:07:21.217 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:07:21.217 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f098eb8c-ff75-4877-b581-cf9a800ede69 00:07:21.217 request: 00:07:21.217 { 00:07:21.217 "uuid": "f098eb8c-ff75-4877-b581-cf9a800ede69", 00:07:21.217 "method": "bdev_lvol_get_lvstores", 00:07:21.217 "req_id": 1 00:07:21.217 } 00:07:21.217 Got JSON-RPC error response 00:07:21.217 response: 00:07:21.217 { 00:07:21.217 "code": -19, 00:07:21.217 "message": "No such device" 00:07:21.217 } 00:07:21.217 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:21.217 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:21.217 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:21.217 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:21.217 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:21.476 aio_bdev 00:07:21.476 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e5194ecd-fe6e-489f-9980-3aad373bbf1d 00:07:21.476 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e5194ecd-fe6e-489f-9980-3aad373bbf1d 00:07:21.476 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:21.476 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:21.476 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:21.476 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:21.476 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:21.735 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b e5194ecd-fe6e-489f-9980-3aad373bbf1d -t 2000 00:07:21.994 [ 00:07:21.994 { 00:07:21.994 "name": "e5194ecd-fe6e-489f-9980-3aad373bbf1d", 00:07:21.994 "aliases": [ 00:07:21.994 "lvs/lvol" 00:07:21.994 ], 00:07:21.994 "product_name": "Logical Volume", 00:07:21.994 "block_size": 4096, 00:07:21.994 "num_blocks": 38912, 00:07:21.994 "uuid": "e5194ecd-fe6e-489f-9980-3aad373bbf1d", 00:07:21.994 "assigned_rate_limits": { 00:07:21.994 "rw_ios_per_sec": 0, 00:07:21.994 "rw_mbytes_per_sec": 0, 00:07:21.994 "r_mbytes_per_sec": 0, 00:07:21.994 "w_mbytes_per_sec": 0 00:07:21.994 }, 00:07:21.994 "claimed": false, 00:07:21.994 "zoned": false, 00:07:21.994 "supported_io_types": { 00:07:21.994 "read": true, 00:07:21.994 "write": true, 00:07:21.994 "unmap": true, 00:07:21.994 "flush": false, 00:07:21.994 "reset": true, 00:07:21.994 "nvme_admin": false, 00:07:21.994 "nvme_io": false, 00:07:21.994 "nvme_io_md": false, 00:07:21.994 "write_zeroes": true, 00:07:21.994 "zcopy": false, 00:07:21.994 "get_zone_info": false, 00:07:21.994 "zone_management": false, 00:07:21.994 "zone_append": false, 00:07:21.994 "compare": false, 00:07:21.994 "compare_and_write": false, 00:07:21.994 "abort": false, 00:07:21.994 "seek_hole": true, 00:07:21.994 "seek_data": true, 00:07:21.994 "copy": false, 00:07:21.994 "nvme_iov_md": false 00:07:21.994 }, 00:07:21.994 "driver_specific": { 00:07:21.994 "lvol": { 00:07:21.994 "lvol_store_uuid": "f098eb8c-ff75-4877-b581-cf9a800ede69", 00:07:21.994 "base_bdev": "aio_bdev", 00:07:21.994 "thin_provision": false, 00:07:21.994 "num_allocated_clusters": 38, 00:07:21.994 "snapshot": false, 00:07:21.994 "clone": false, 00:07:21.994 "esnap_clone": false 00:07:21.994 } 00:07:21.994 } 00:07:21.994 } 00:07:21.994 ] 00:07:21.994 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:21.995 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f098eb8c-ff75-4877-b581-cf9a800ede69 
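For anyone reading along, the lvs_grow_clean block above is exercising a simple recover-after-hotremove check: delete the backing aio bdev, confirm the lvstore lookup now fails with JSON-RPC error -19 (No such device), re-create the aio bdev on the same file, wait for examine, and confirm the lvol store and its lvol come back with the expected free cluster count. This is only a minimal sketch of that shape, not the literal nvmf_lvs_grow.sh; the rpc.py calls, UUID and paths are the ones printed in this run and will differ on another run.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
  AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev
  LVS=f098eb8c-ff75-4877-b581-cf9a800ede69
  $RPC bdev_aio_delete aio_bdev                    # hot-removes the base bdev, the lvstore closes
  ! $RPC bdev_lvol_get_lvstores -u "$LVS"          # expected JSON-RPC error -19 "No such device"
  $RPC bdev_aio_create "$AIO" aio_bdev 4096        # re-attach the same file as aio_bdev
  $RPC bdev_wait_for_examine                       # lvol examine re-discovers the lvs/lvol metadata
  $RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].free_clusters'   # back to 61 in this run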
00:07:21.995 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:21.995 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:21.995 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f098eb8c-ff75-4877-b581-cf9a800ede69 00:07:21.995 14:48:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:22.257 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:22.257 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete e5194ecd-fe6e-489f-9980-3aad373bbf1d 00:07:22.524 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f098eb8c-ff75-4877-b581-cf9a800ede69 00:07:22.783 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:22.783 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:07:22.783 00:07:22.783 real 0m15.685s 00:07:22.783 user 0m15.289s 00:07:22.783 sys 0m1.438s 00:07:22.783 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.783 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:22.783 ************************************ 00:07:22.783 END TEST lvs_grow_clean 00:07:22.783 ************************************ 00:07:23.042 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:23.042 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:23.042 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.042 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:23.042 ************************************ 00:07:23.042 START TEST lvs_grow_dirty 00:07:23.042 ************************************ 00:07:23.042 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:23.042 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:23.042 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:23.042 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:23.042 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:23.042 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:23.042 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:23.042 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:07:23.042 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:07:23.042 14:48:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:23.301 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:23.301 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:23.301 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a74dbb9c-6821-4536-ae94-72cf95b21844 00:07:23.301 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a74dbb9c-6821-4536-ae94-72cf95b21844 00:07:23.301 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:23.560 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:23.560 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:23.560 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u a74dbb9c-6821-4536-ae94-72cf95b21844 lvol 150 00:07:23.819 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ecafe882-5a37-4df6-9ab8-4ea84f2d2b23 00:07:23.819 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:07:23.819 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:24.078 [2024-12-11 14:48:16.902137] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:24.078 [2024-12-11 14:48:16.902188] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:24.078 true 00:07:24.078 14:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a74dbb9c-6821-4536-ae94-72cf95b21844 00:07:24.078 
14:48:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:24.078 14:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:24.078 14:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:24.337 14:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ecafe882-5a37-4df6-9ab8-4ea84f2d2b23 00:07:24.596 14:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:24.855 [2024-12-11 14:48:17.664406] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.855 14:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:24.855 14:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2967613 00:07:24.855 14:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:24.855 14:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:24.855 14:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2967613 /var/tmp/bdevperf.sock 00:07:24.855 14:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2967613 ']' 00:07:24.855 14:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:24.855 14:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.855 14:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:24.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:24.855 14:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.855 14:48:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:25.114 [2024-12-11 14:48:17.905154] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
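The stretch of log that follows exports the freshly created lvol over NVMe-oF TCP and points a standalone bdevperf instance at it. Condensed into a sketch below; the command forms are copied from this run, while the 10.0.0.2 listener address, core masks and RPC socket path are specific to this rig.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
  LVOL=ecafe882-5a37-4df6-9ab8-4ea84f2d2b23
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$LVOL"
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # bdevperf runs as its own SPDK app on a private RPC socket, attaches the exported
  # namespace as Nvme0, then perform_tests drives the 10 s randwrite run shown below
  $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests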
00:07:25.114 [2024-12-11 14:48:17.905214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2967613 ] 00:07:25.114 [2024-12-11 14:48:17.979716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.114 [2024-12-11 14:48:18.021182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.114 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.114 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:25.115 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:25.682 Nvme0n1 00:07:25.682 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:25.682 [ 00:07:25.682 { 00:07:25.682 "name": "Nvme0n1", 00:07:25.682 "aliases": [ 00:07:25.682 "ecafe882-5a37-4df6-9ab8-4ea84f2d2b23" 00:07:25.682 ], 00:07:25.682 "product_name": "NVMe disk", 00:07:25.682 "block_size": 4096, 00:07:25.682 "num_blocks": 38912, 00:07:25.682 "uuid": "ecafe882-5a37-4df6-9ab8-4ea84f2d2b23", 00:07:25.682 "numa_id": 1, 00:07:25.682 "assigned_rate_limits": { 00:07:25.682 "rw_ios_per_sec": 0, 00:07:25.682 "rw_mbytes_per_sec": 0, 00:07:25.682 "r_mbytes_per_sec": 0, 00:07:25.682 "w_mbytes_per_sec": 0 00:07:25.682 }, 00:07:25.682 "claimed": false, 00:07:25.682 "zoned": false, 00:07:25.682 "supported_io_types": { 00:07:25.682 "read": true, 00:07:25.682 "write": true, 00:07:25.682 "unmap": true, 00:07:25.682 "flush": true, 00:07:25.682 "reset": true, 00:07:25.682 "nvme_admin": true, 00:07:25.682 "nvme_io": true, 00:07:25.682 "nvme_io_md": false, 00:07:25.682 "write_zeroes": true, 00:07:25.682 "zcopy": false, 00:07:25.682 "get_zone_info": false, 00:07:25.682 "zone_management": false, 00:07:25.682 "zone_append": false, 00:07:25.682 "compare": true, 00:07:25.682 "compare_and_write": true, 00:07:25.682 "abort": true, 00:07:25.682 "seek_hole": false, 00:07:25.682 "seek_data": false, 00:07:25.682 "copy": true, 00:07:25.682 "nvme_iov_md": false 00:07:25.682 }, 00:07:25.682 "memory_domains": [ 00:07:25.682 { 00:07:25.682 "dma_device_id": "system", 00:07:25.682 "dma_device_type": 1 00:07:25.682 } 00:07:25.682 ], 00:07:25.682 "driver_specific": { 00:07:25.682 "nvme": [ 00:07:25.682 { 00:07:25.682 "trid": { 00:07:25.682 "trtype": "TCP", 00:07:25.682 "adrfam": "IPv4", 00:07:25.682 "traddr": "10.0.0.2", 00:07:25.682 "trsvcid": "4420", 00:07:25.682 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:25.682 }, 00:07:25.682 "ctrlr_data": { 00:07:25.682 "cntlid": 1, 00:07:25.682 "vendor_id": "0x8086", 00:07:25.682 "model_number": "SPDK bdev Controller", 00:07:25.682 "serial_number": "SPDK0", 00:07:25.682 "firmware_revision": "25.01", 00:07:25.682 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:25.682 "oacs": { 00:07:25.682 "security": 0, 00:07:25.682 "format": 0, 00:07:25.682 "firmware": 0, 00:07:25.682 "ns_manage": 0 00:07:25.682 }, 00:07:25.682 "multi_ctrlr": true, 00:07:25.682 
"ana_reporting": false 00:07:25.682 }, 00:07:25.682 "vs": { 00:07:25.682 "nvme_version": "1.3" 00:07:25.682 }, 00:07:25.682 "ns_data": { 00:07:25.682 "id": 1, 00:07:25.682 "can_share": true 00:07:25.682 } 00:07:25.682 } 00:07:25.682 ], 00:07:25.682 "mp_policy": "active_passive" 00:07:25.682 } 00:07:25.682 } 00:07:25.682 ] 00:07:25.682 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2967628 00:07:25.682 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:25.682 14:48:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:25.941 Running I/O for 10 seconds... 00:07:26.878 Latency(us) 00:07:26.878 [2024-12-11T13:48:19.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:26.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.878 Nvme0n1 : 1.00 22807.00 89.09 0.00 0.00 0.00 0.00 0.00 00:07:26.878 [2024-12-11T13:48:19.926Z] =================================================================================================================== 00:07:26.878 [2024-12-11T13:48:19.926Z] Total : 22807.00 89.09 0.00 0.00 0.00 0.00 0.00 00:07:26.878 00:07:27.815 14:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a74dbb9c-6821-4536-ae94-72cf95b21844 00:07:27.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.815 Nvme0n1 : 2.00 22919.50 89.53 0.00 0.00 0.00 0.00 0.00 00:07:27.815 [2024-12-11T13:48:20.863Z] =================================================================================================================== 00:07:27.815 [2024-12-11T13:48:20.863Z] Total : 22919.50 89.53 0.00 0.00 0.00 0.00 0.00 00:07:27.815 00:07:28.074 true 00:07:28.074 14:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a74dbb9c-6821-4536-ae94-72cf95b21844 00:07:28.074 14:48:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:28.333 14:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:28.333 14:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:28.333 14:48:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2967628 00:07:28.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.901 Nvme0n1 : 3.00 22986.67 89.79 0.00 0.00 0.00 0.00 0.00 00:07:28.901 [2024-12-11T13:48:21.949Z] =================================================================================================================== 00:07:28.901 [2024-12-11T13:48:21.949Z] Total : 22986.67 89.79 0.00 0.00 0.00 0.00 0.00 00:07:28.901 00:07:29.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.838 Nvme0n1 : 4.00 23051.00 90.04 0.00 0.00 0.00 0.00 0.00 00:07:29.838 [2024-12-11T13:48:22.886Z] 
=================================================================================================================== 00:07:29.838 [2024-12-11T13:48:22.886Z] Total : 23051.00 90.04 0.00 0.00 0.00 0.00 0.00 00:07:29.838 00:07:30.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.777 Nvme0n1 : 5.00 23096.40 90.22 0.00 0.00 0.00 0.00 0.00 00:07:30.777 [2024-12-11T13:48:23.825Z] =================================================================================================================== 00:07:30.777 [2024-12-11T13:48:23.825Z] Total : 23096.40 90.22 0.00 0.00 0.00 0.00 0.00 00:07:30.777 00:07:31.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.811 Nvme0n1 : 6.00 23112.50 90.28 0.00 0.00 0.00 0.00 0.00 00:07:31.811 [2024-12-11T13:48:24.859Z] =================================================================================================================== 00:07:31.811 [2024-12-11T13:48:24.859Z] Total : 23112.50 90.28 0.00 0.00 0.00 0.00 0.00 00:07:31.811 00:07:33.187 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.187 Nvme0n1 : 7.00 23140.14 90.39 0.00 0.00 0.00 0.00 0.00 00:07:33.187 [2024-12-11T13:48:26.235Z] =================================================================================================================== 00:07:33.187 [2024-12-11T13:48:26.235Z] Total : 23140.14 90.39 0.00 0.00 0.00 0.00 0.00 00:07:33.187 00:07:34.124 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.124 Nvme0n1 : 8.00 23157.25 90.46 0.00 0.00 0.00 0.00 0.00 00:07:34.124 [2024-12-11T13:48:27.172Z] =================================================================================================================== 00:07:34.124 [2024-12-11T13:48:27.172Z] Total : 23157.25 90.46 0.00 0.00 0.00 0.00 0.00 00:07:34.124 00:07:35.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.061 Nvme0n1 : 9.00 23169.44 90.51 0.00 0.00 0.00 0.00 0.00 00:07:35.061 [2024-12-11T13:48:28.109Z] =================================================================================================================== 00:07:35.061 [2024-12-11T13:48:28.109Z] Total : 23169.44 90.51 0.00 0.00 0.00 0.00 0.00 00:07:35.061 00:07:35.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.998 Nvme0n1 : 10.00 23183.30 90.56 0.00 0.00 0.00 0.00 0.00 00:07:35.998 [2024-12-11T13:48:29.046Z] =================================================================================================================== 00:07:35.998 [2024-12-11T13:48:29.046Z] Total : 23183.30 90.56 0.00 0.00 0.00 0.00 0.00 00:07:35.998 00:07:35.998 00:07:35.998 Latency(us) 00:07:35.998 [2024-12-11T13:48:29.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.998 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.998 Nvme0n1 : 10.00 23183.10 90.56 0.00 0.00 5518.01 3248.31 11967.44 00:07:35.998 [2024-12-11T13:48:29.046Z] =================================================================================================================== 00:07:35.998 [2024-12-11T13:48:29.046Z] Total : 23183.10 90.56 0.00 0.00 5518.01 3248.31 11967.44 00:07:35.998 { 00:07:35.998 "results": [ 00:07:35.998 { 00:07:35.998 "job": "Nvme0n1", 00:07:35.998 "core_mask": "0x2", 00:07:35.998 "workload": "randwrite", 00:07:35.998 "status": "finished", 00:07:35.998 "queue_depth": 128, 00:07:35.998 "io_size": 4096, 00:07:35.998 
"runtime": 10.002849, 00:07:35.998 "iops": 23183.095136195698, 00:07:35.998 "mibps": 90.55896537576444, 00:07:35.998 "io_failed": 0, 00:07:35.998 "io_timeout": 0, 00:07:35.998 "avg_latency_us": 5518.005068262128, 00:07:35.998 "min_latency_us": 3248.3060869565215, 00:07:35.998 "max_latency_us": 11967.44347826087 00:07:35.998 } 00:07:35.998 ], 00:07:35.998 "core_count": 1 00:07:35.998 } 00:07:35.998 14:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2967613 00:07:35.998 14:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2967613 ']' 00:07:35.998 14:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2967613 00:07:35.998 14:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:35.998 14:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.998 14:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2967613 00:07:35.998 14:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:35.998 14:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:35.998 14:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2967613' 00:07:35.998 killing process with pid 2967613 00:07:35.998 14:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2967613 00:07:35.998 Received shutdown signal, test time was about 10.000000 seconds 00:07:35.998 00:07:35.998 Latency(us) 00:07:35.998 [2024-12-11T13:48:29.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.998 [2024-12-11T13:48:29.046Z] =================================================================================================================== 00:07:35.998 [2024-12-11T13:48:29.046Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:35.998 14:48:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2967613 00:07:36.257 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:36.257 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:36.516 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a74dbb9c-6821-4536-ae94-72cf95b21844 00:07:36.516 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:36.774 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:36.775 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:36.775 14:48:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2963887 00:07:36.775 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2963887 00:07:36.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2963887 Killed "${NVMF_APP[@]}" "$@" 00:07:36.775 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:36.775 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:36.775 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:36.775 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.775 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:36.775 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2969485 00:07:36.775 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2969485 00:07:36.775 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:36.775 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2969485 ']' 00:07:36.775 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.775 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.775 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.775 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.775 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:36.775 [2024-12-11 14:48:29.755604] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:36.775 [2024-12-11 14:48:29.755650] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.034 [2024-12-11 14:48:29.837263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.034 [2024-12-11 14:48:29.876525] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.034 [2024-12-11 14:48:29.876561] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.034 [2024-12-11 14:48:29.876568] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.034 [2024-12-11 14:48:29.876575] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:07:37.034 [2024-12-11 14:48:29.876579] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:37.034 [2024-12-11 14:48:29.877138] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.034 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.034 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:37.034 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:37.034 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.034 14:48:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:37.034 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.034 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:37.293 [2024-12-11 14:48:30.188027] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:37.293 [2024-12-11 14:48:30.188115] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:37.293 [2024-12-11 14:48:30.188142] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:37.293 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:37.293 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ecafe882-5a37-4df6-9ab8-4ea84f2d2b23 00:07:37.293 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ecafe882-5a37-4df6-9ab8-4ea84f2d2b23 00:07:37.293 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.293 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:37.293 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.293 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.293 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:37.553 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b ecafe882-5a37-4df6-9ab8-4ea84f2d2b23 -t 2000 00:07:37.553 [ 00:07:37.553 { 00:07:37.553 "name": "ecafe882-5a37-4df6-9ab8-4ea84f2d2b23", 00:07:37.553 "aliases": [ 00:07:37.553 "lvs/lvol" 00:07:37.553 ], 00:07:37.553 "product_name": "Logical Volume", 00:07:37.553 "block_size": 4096, 00:07:37.553 "num_blocks": 38912, 00:07:37.553 "uuid": "ecafe882-5a37-4df6-9ab8-4ea84f2d2b23", 00:07:37.553 "assigned_rate_limits": { 00:07:37.553 "rw_ios_per_sec": 0, 00:07:37.553 "rw_mbytes_per_sec": 0, 
00:07:37.553 "r_mbytes_per_sec": 0, 00:07:37.553 "w_mbytes_per_sec": 0 00:07:37.553 }, 00:07:37.553 "claimed": false, 00:07:37.553 "zoned": false, 00:07:37.553 "supported_io_types": { 00:07:37.553 "read": true, 00:07:37.553 "write": true, 00:07:37.553 "unmap": true, 00:07:37.553 "flush": false, 00:07:37.553 "reset": true, 00:07:37.553 "nvme_admin": false, 00:07:37.553 "nvme_io": false, 00:07:37.553 "nvme_io_md": false, 00:07:37.553 "write_zeroes": true, 00:07:37.553 "zcopy": false, 00:07:37.553 "get_zone_info": false, 00:07:37.553 "zone_management": false, 00:07:37.553 "zone_append": false, 00:07:37.553 "compare": false, 00:07:37.553 "compare_and_write": false, 00:07:37.553 "abort": false, 00:07:37.553 "seek_hole": true, 00:07:37.553 "seek_data": true, 00:07:37.553 "copy": false, 00:07:37.553 "nvme_iov_md": false 00:07:37.553 }, 00:07:37.553 "driver_specific": { 00:07:37.553 "lvol": { 00:07:37.553 "lvol_store_uuid": "a74dbb9c-6821-4536-ae94-72cf95b21844", 00:07:37.553 "base_bdev": "aio_bdev", 00:07:37.553 "thin_provision": false, 00:07:37.553 "num_allocated_clusters": 38, 00:07:37.553 "snapshot": false, 00:07:37.553 "clone": false, 00:07:37.553 "esnap_clone": false 00:07:37.553 } 00:07:37.553 } 00:07:37.553 } 00:07:37.553 ] 00:07:37.812 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:37.812 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a74dbb9c-6821-4536-ae94-72cf95b21844 00:07:37.812 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:37.812 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:37.812 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a74dbb9c-6821-4536-ae94-72cf95b21844 00:07:37.812 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:38.070 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:38.071 14:48:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:38.330 [2024-12-11 14:48:31.160871] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:38.330 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a74dbb9c-6821-4536-ae94-72cf95b21844 00:07:38.330 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:38.330 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a74dbb9c-6821-4536-ae94-72cf95b21844 00:07:38.330 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:07:38.330 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.330 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:07:38.330 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.330 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:07:38.330 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.330 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:07:38.330 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:07:38.330 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a74dbb9c-6821-4536-ae94-72cf95b21844 00:07:38.589 request: 00:07:38.589 { 00:07:38.589 "uuid": "a74dbb9c-6821-4536-ae94-72cf95b21844", 00:07:38.589 "method": "bdev_lvol_get_lvstores", 00:07:38.589 "req_id": 1 00:07:38.589 } 00:07:38.589 Got JSON-RPC error response 00:07:38.589 response: 00:07:38.589 { 00:07:38.589 "code": -19, 00:07:38.589 "message": "No such device" 00:07:38.589 } 00:07:38.589 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:38.589 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:38.589 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:38.589 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:38.589 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:38.589 aio_bdev 00:07:38.589 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ecafe882-5a37-4df6-9ab8-4ea84f2d2b23 00:07:38.589 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=ecafe882-5a37-4df6-9ab8-4ea84f2d2b23 00:07:38.589 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:38.589 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:38.589 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:38.589 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:38.589 14:48:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:38.846 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b ecafe882-5a37-4df6-9ab8-4ea84f2d2b23 -t 2000 00:07:39.105 [ 00:07:39.105 { 00:07:39.105 "name": "ecafe882-5a37-4df6-9ab8-4ea84f2d2b23", 00:07:39.105 "aliases": [ 00:07:39.105 "lvs/lvol" 00:07:39.105 ], 00:07:39.105 "product_name": "Logical Volume", 00:07:39.105 "block_size": 4096, 00:07:39.105 "num_blocks": 38912, 00:07:39.105 "uuid": "ecafe882-5a37-4df6-9ab8-4ea84f2d2b23", 00:07:39.105 "assigned_rate_limits": { 00:07:39.105 "rw_ios_per_sec": 0, 00:07:39.105 "rw_mbytes_per_sec": 0, 00:07:39.105 "r_mbytes_per_sec": 0, 00:07:39.105 "w_mbytes_per_sec": 0 00:07:39.105 }, 00:07:39.105 "claimed": false, 00:07:39.105 "zoned": false, 00:07:39.105 "supported_io_types": { 00:07:39.105 "read": true, 00:07:39.105 "write": true, 00:07:39.105 "unmap": true, 00:07:39.105 "flush": false, 00:07:39.105 "reset": true, 00:07:39.105 "nvme_admin": false, 00:07:39.105 "nvme_io": false, 00:07:39.105 "nvme_io_md": false, 00:07:39.105 "write_zeroes": true, 00:07:39.105 "zcopy": false, 00:07:39.105 "get_zone_info": false, 00:07:39.105 "zone_management": false, 00:07:39.105 "zone_append": false, 00:07:39.105 "compare": false, 00:07:39.105 "compare_and_write": false, 00:07:39.105 "abort": false, 00:07:39.105 "seek_hole": true, 00:07:39.105 "seek_data": true, 00:07:39.105 "copy": false, 00:07:39.105 "nvme_iov_md": false 00:07:39.105 }, 00:07:39.105 "driver_specific": { 00:07:39.105 "lvol": { 00:07:39.105 "lvol_store_uuid": "a74dbb9c-6821-4536-ae94-72cf95b21844", 00:07:39.105 "base_bdev": "aio_bdev", 00:07:39.105 "thin_provision": false, 00:07:39.105 "num_allocated_clusters": 38, 00:07:39.105 "snapshot": false, 00:07:39.105 "clone": false, 00:07:39.105 "esnap_clone": false 00:07:39.105 } 00:07:39.105 } 00:07:39.105 } 00:07:39.105 ] 00:07:39.105 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:39.105 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a74dbb9c-6821-4536-ae94-72cf95b21844 00:07:39.105 14:48:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:39.364 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:39.364 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a74dbb9c-6821-4536-ae94-72cf95b21844 00:07:39.364 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:39.364 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:39.364 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete ecafe882-5a37-4df6-9ab8-4ea84f2d2b23 00:07:39.622 14:48:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a74dbb9c-6821-4536-ae94-72cf95b21844 00:07:39.881 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:40.139 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:07:40.139 00:07:40.139 real 0m17.107s 00:07:40.139 user 0m43.983s 00:07:40.139 sys 0m3.869s 00:07:40.139 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.139 14:48:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.139 ************************************ 00:07:40.139 END TEST lvs_grow_dirty 00:07:40.139 ************************************ 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:40.139 nvmf_trace.0 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:40.139 rmmod nvme_tcp 00:07:40.139 rmmod nvme_fabrics 00:07:40.139 rmmod nvme_keyring 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 
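For reference, the grow mechanics that both lvs_grow tests above revolve around boil down to four calls: enlarge the backing file, let the aio bdev pick up the new size, grow the lvstore into it, then re-read the cluster counts. With the 4 MiB cluster size used here, the numbers in this run line up as 49 data clusters for the 200M file, 99 after growing to 400M, and 99 minus the 38 clusters allocated to the 150M lvol = 61 free. The sketch below reuses this run's UUID and paths and is not the full test script.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
  AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev
  LVS=a74dbb9c-6821-4536-ae94-72cf95b21844
  truncate -s 400M "$AIO"                          # grow the file backing the aio bdev
  $RPC bdev_aio_rescan aio_bdev                    # aio bdev re-reads its size (51200 -> 102400 blocks here)
  $RPC bdev_lvol_grow_lvstore -u "$LVS"            # lvstore claims the newly available space
  $RPC bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'   # 49 -> 99 in this run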
00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2969485 ']' 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2969485 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2969485 ']' 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2969485 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.139 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2969485 00:07:40.398 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.398 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.398 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2969485' 00:07:40.398 killing process with pid 2969485 00:07:40.398 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2969485 00:07:40.398 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2969485 00:07:40.398 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:40.398 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:40.398 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:40.398 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:40.398 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:40.398 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:40.398 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:40.398 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:40.398 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:40.398 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.398 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.398 14:48:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.931 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:42.931 00:07:42.931 real 0m42.090s 00:07:42.931 user 1m4.966s 00:07:42.931 sys 0m10.284s 00:07:42.931 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.931 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.931 ************************************ 00:07:42.931 END TEST nvmf_lvs_grow 00:07:42.931 ************************************ 00:07:42.931 14:48:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:42.931 14:48:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.931 14:48:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.932 ************************************ 00:07:42.932 START TEST nvmf_bdev_io_wait 00:07:42.932 ************************************ 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:42.932 * Looking for test storage... 00:07:42.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:42.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.932 --rc genhtml_branch_coverage=1 00:07:42.932 --rc genhtml_function_coverage=1 00:07:42.932 --rc genhtml_legend=1 00:07:42.932 --rc geninfo_all_blocks=1 00:07:42.932 --rc geninfo_unexecuted_blocks=1 00:07:42.932 00:07:42.932 ' 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:42.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.932 --rc genhtml_branch_coverage=1 00:07:42.932 --rc genhtml_function_coverage=1 00:07:42.932 --rc genhtml_legend=1 00:07:42.932 --rc geninfo_all_blocks=1 00:07:42.932 --rc geninfo_unexecuted_blocks=1 00:07:42.932 00:07:42.932 ' 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:42.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.932 --rc genhtml_branch_coverage=1 00:07:42.932 --rc genhtml_function_coverage=1 00:07:42.932 --rc genhtml_legend=1 00:07:42.932 --rc geninfo_all_blocks=1 00:07:42.932 --rc geninfo_unexecuted_blocks=1 00:07:42.932 00:07:42.932 ' 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:42.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.932 --rc genhtml_branch_coverage=1 00:07:42.932 --rc genhtml_function_coverage=1 00:07:42.932 --rc genhtml_legend=1 00:07:42.932 --rc geninfo_all_blocks=1 00:07:42.932 --rc geninfo_unexecuted_blocks=1 00:07:42.932 00:07:42.932 ' 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:07:42.932 14:48:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.932 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:42.933 14:48:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:49.504 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:49.504 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.504 14:48:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:49.504 Found net devices under 0000:86:00.0: cvl_0_0 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:49.504 Found net devices under 0000:86:00.1: cvl_0_1 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:49.504 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:49.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:49.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:07:49.505 00:07:49.505 --- 10.0.0.2 ping statistics --- 00:07:49.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.505 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:49.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:49.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:07:49.505 00:07:49.505 --- 10.0.0.1 ping statistics --- 00:07:49.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.505 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2973761 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2973761 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2973761 ']' 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.505 [2024-12-11 14:48:41.714095] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:49.505 [2024-12-11 14:48:41.714138] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.505 [2024-12-11 14:48:41.791233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.505 [2024-12-11 14:48:41.831977] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:49.505 [2024-12-11 14:48:41.832017] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.505 [2024-12-11 14:48:41.832024] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.505 [2024-12-11 14:48:41.832030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.505 [2024-12-11 14:48:41.832036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.505 [2024-12-11 14:48:41.833492] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.505 [2024-12-11 14:48:41.833604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.505 [2024-12-11 14:48:41.833688] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.505 [2024-12-11 14:48:41.833689] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:07:49.505 [2024-12-11 14:48:41.986136] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.505 14:48:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.505 Malloc0 00:07:49.505 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.505 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:49.505 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.505 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.505 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.505 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.506 [2024-12-11 14:48:42.029627] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2973793 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2973795 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:49.506 { 00:07:49.506 "params": { 
00:07:49.506 "name": "Nvme$subsystem", 00:07:49.506 "trtype": "$TEST_TRANSPORT", 00:07:49.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:49.506 "adrfam": "ipv4", 00:07:49.506 "trsvcid": "$NVMF_PORT", 00:07:49.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:49.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:49.506 "hdgst": ${hdgst:-false}, 00:07:49.506 "ddgst": ${ddgst:-false} 00:07:49.506 }, 00:07:49.506 "method": "bdev_nvme_attach_controller" 00:07:49.506 } 00:07:49.506 EOF 00:07:49.506 )") 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2973797 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:49.506 { 00:07:49.506 "params": { 00:07:49.506 "name": "Nvme$subsystem", 00:07:49.506 "trtype": "$TEST_TRANSPORT", 00:07:49.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:49.506 "adrfam": "ipv4", 00:07:49.506 "trsvcid": "$NVMF_PORT", 00:07:49.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:49.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:49.506 "hdgst": ${hdgst:-false}, 00:07:49.506 "ddgst": ${ddgst:-false} 00:07:49.506 }, 00:07:49.506 "method": "bdev_nvme_attach_controller" 00:07:49.506 } 00:07:49.506 EOF 00:07:49.506 )") 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2973800 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:49.506 { 00:07:49.506 "params": { 
00:07:49.506 "name": "Nvme$subsystem", 00:07:49.506 "trtype": "$TEST_TRANSPORT", 00:07:49.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:49.506 "adrfam": "ipv4", 00:07:49.506 "trsvcid": "$NVMF_PORT", 00:07:49.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:49.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:49.506 "hdgst": ${hdgst:-false}, 00:07:49.506 "ddgst": ${ddgst:-false} 00:07:49.506 }, 00:07:49.506 "method": "bdev_nvme_attach_controller" 00:07:49.506 } 00:07:49.506 EOF 00:07:49.506 )") 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:49.506 { 00:07:49.506 "params": { 00:07:49.506 "name": "Nvme$subsystem", 00:07:49.506 "trtype": "$TEST_TRANSPORT", 00:07:49.506 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:49.506 "adrfam": "ipv4", 00:07:49.506 "trsvcid": "$NVMF_PORT", 00:07:49.506 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:49.506 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:49.506 "hdgst": ${hdgst:-false}, 00:07:49.506 "ddgst": ${ddgst:-false} 00:07:49.506 }, 00:07:49.506 "method": "bdev_nvme_attach_controller" 00:07:49.506 } 00:07:49.506 EOF 00:07:49.506 )") 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2973793 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:49.506 "params": { 00:07:49.506 "name": "Nvme1", 00:07:49.506 "trtype": "tcp", 00:07:49.506 "traddr": "10.0.0.2", 00:07:49.506 "adrfam": "ipv4", 00:07:49.506 "trsvcid": "4420", 00:07:49.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:49.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:49.506 "hdgst": false, 00:07:49.506 "ddgst": false 00:07:49.506 }, 00:07:49.506 "method": "bdev_nvme_attach_controller" 00:07:49.506 }' 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:49.506 "params": { 00:07:49.506 "name": "Nvme1", 00:07:49.506 "trtype": "tcp", 00:07:49.506 "traddr": "10.0.0.2", 00:07:49.506 "adrfam": "ipv4", 00:07:49.506 "trsvcid": "4420", 00:07:49.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:49.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:49.506 "hdgst": false, 00:07:49.506 "ddgst": false 00:07:49.506 }, 00:07:49.506 "method": "bdev_nvme_attach_controller" 00:07:49.506 }' 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:49.506 "params": { 00:07:49.506 "name": "Nvme1", 00:07:49.506 "trtype": "tcp", 00:07:49.506 "traddr": "10.0.0.2", 00:07:49.506 "adrfam": "ipv4", 00:07:49.506 "trsvcid": "4420", 00:07:49.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:49.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:49.506 "hdgst": false, 00:07:49.506 "ddgst": false 00:07:49.506 }, 00:07:49.506 "method": "bdev_nvme_attach_controller" 00:07:49.506 }' 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:49.506 14:48:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:49.506 "params": { 00:07:49.506 "name": "Nvme1", 00:07:49.506 "trtype": "tcp", 00:07:49.506 "traddr": "10.0.0.2", 00:07:49.506 "adrfam": "ipv4", 00:07:49.506 "trsvcid": "4420", 00:07:49.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:49.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:49.506 "hdgst": false, 00:07:49.506 "ddgst": false 00:07:49.506 }, 00:07:49.506 "method": "bdev_nvme_attach_controller" 00:07:49.506 }' 00:07:49.506 [2024-12-11 14:48:42.080045] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:49.506 [2024-12-11 14:48:42.080094] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:49.506 [2024-12-11 14:48:42.082597] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:49.507 [2024-12-11 14:48:42.082646] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:49.507 [2024-12-11 14:48:42.083396] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:07:49.507 [2024-12-11 14:48:42.083436] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:49.507 [2024-12-11 14:48:42.083719] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:07:49.507 [2024-12-11 14:48:42.083755] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:49.507 [2024-12-11 14:48:42.266800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.507 [2024-12-11 14:48:42.308419] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:49.507 [2024-12-11 14:48:42.357721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.507 [2024-12-11 14:48:42.398511] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:07:49.507 [2024-12-11 14:48:42.450049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.507 [2024-12-11 14:48:42.494316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.507 [2024-12-11 14:48:42.498378] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:07:49.507 [2024-12-11 14:48:42.536397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:07:49.766 Running I/O for 1 seconds... 00:07:49.766 Running I/O for 1 seconds... 00:07:49.766 Running I/O for 1 seconds... 00:07:49.766 Running I/O for 1 seconds... 00:07:50.702 12103.00 IOPS, 47.28 MiB/s 00:07:50.702 Latency(us) 00:07:50.702 [2024-12-11T13:48:43.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.702 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:50.702 Nvme1n1 : 1.01 12145.43 47.44 0.00 0.00 10499.10 6097.70 15044.79 00:07:50.702 [2024-12-11T13:48:43.750Z] =================================================================================================================== 00:07:50.702 [2024-12-11T13:48:43.750Z] Total : 12145.43 47.44 0.00 0.00 10499.10 6097.70 15044.79 00:07:50.702 237440.00 IOPS, 927.50 MiB/s 00:07:50.702 Latency(us) 00:07:50.702 [2024-12-11T13:48:43.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.702 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:50.702 Nvme1n1 : 1.00 237069.16 926.05 0.00 0.00 537.04 227.06 1552.92 00:07:50.702 [2024-12-11T13:48:43.750Z] =================================================================================================================== 00:07:50.702 [2024-12-11T13:48:43.750Z] Total : 237069.16 926.05 0.00 0.00 537.04 227.06 1552.92 00:07:50.702 10232.00 IOPS, 39.97 MiB/s 00:07:50.702 Latency(us) 00:07:50.702 [2024-12-11T13:48:43.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.702 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:50.702 Nvme1n1 : 1.01 10301.50 40.24 0.00 0.00 12389.22 5157.40 20857.54 00:07:50.702 [2024-12-11T13:48:43.750Z] =================================================================================================================== 00:07:50.702 [2024-12-11T13:48:43.750Z] Total : 10301.50 40.24 0.00 0.00 12389.22 5157.40 20857.54 00:07:50.961 10964.00 IOPS, 42.83 MiB/s 00:07:50.961 Latency(us) 00:07:50.961 [2024-12-11T13:48:44.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.961 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:50.961 Nvme1n1 : 1.01 11043.83 43.14 0.00 0.00 11559.54 3561.74 21769.35 00:07:50.961 [2024-12-11T13:48:44.009Z] 
=================================================================================================================== 00:07:50.961 [2024-12-11T13:48:44.009Z] Total : 11043.83 43.14 0.00 0.00 11559.54 3561.74 21769.35 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2973795 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2973797 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2973800 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:50.961 rmmod nvme_tcp 00:07:50.961 rmmod nvme_fabrics 00:07:50.961 rmmod nvme_keyring 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2973761 ']' 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2973761 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2973761 ']' 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2973761 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.961 14:48:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2973761 00:07:50.961 14:48:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.961 14:48:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.961 14:48:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 2973761' 00:07:50.961 killing process with pid 2973761 00:07:50.961 14:48:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2973761 00:07:50.961 14:48:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2973761 00:07:51.221 14:48:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:51.221 14:48:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:51.221 14:48:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:51.221 14:48:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:51.221 14:48:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:51.221 14:48:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:51.221 14:48:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:51.221 14:48:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:51.221 14:48:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:51.221 14:48:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.221 14:48:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.221 14:48:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.758 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:53.758 00:07:53.758 real 0m10.720s 00:07:53.758 user 0m15.942s 00:07:53.758 sys 0m6.238s 00:07:53.758 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.758 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.758 ************************************ 00:07:53.758 END TEST nvmf_bdev_io_wait 00:07:53.758 ************************************ 00:07:53.758 14:48:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:53.758 14:48:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:53.758 14:48:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.758 14:48:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:53.758 ************************************ 00:07:53.758 START TEST nvmf_queue_depth 00:07:53.758 ************************************ 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:53.759 * Looking for test storage... 
00:07:53.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:53.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.759 --rc genhtml_branch_coverage=1 00:07:53.759 --rc genhtml_function_coverage=1 00:07:53.759 --rc genhtml_legend=1 00:07:53.759 --rc geninfo_all_blocks=1 00:07:53.759 --rc geninfo_unexecuted_blocks=1 00:07:53.759 00:07:53.759 ' 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:53.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.759 --rc genhtml_branch_coverage=1 00:07:53.759 --rc genhtml_function_coverage=1 00:07:53.759 --rc genhtml_legend=1 00:07:53.759 --rc geninfo_all_blocks=1 00:07:53.759 --rc geninfo_unexecuted_blocks=1 00:07:53.759 00:07:53.759 ' 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:53.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.759 --rc genhtml_branch_coverage=1 00:07:53.759 --rc genhtml_function_coverage=1 00:07:53.759 --rc genhtml_legend=1 00:07:53.759 --rc geninfo_all_blocks=1 00:07:53.759 --rc geninfo_unexecuted_blocks=1 00:07:53.759 00:07:53.759 ' 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:53.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:53.759 --rc genhtml_branch_coverage=1 00:07:53.759 --rc genhtml_function_coverage=1 00:07:53.759 --rc genhtml_legend=1 00:07:53.759 --rc geninfo_all_blocks=1 00:07:53.759 --rc geninfo_unexecuted_blocks=1 00:07:53.759 00:07:53.759 ' 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:07:53.759 14:48:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:53.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:53.759 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:53.760 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:53.760 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:53.760 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:53.760 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.760 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:53.760 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:53.760 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:53.760 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.760 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:53.760 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.760 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:53.760 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:53.760 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:53.760 14:48:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:00.332 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:00.332 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.332 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:00.333 Found net devices under 0000:86:00.0: cvl_0_0 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:00.333 Found net devices under 0000:86:00.1: cvl_0_1 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:00.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:08:00.333 00:08:00.333 --- 10.0.0.2 ping statistics --- 00:08:00.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.333 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:00.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:08:00.333 00:08:00.333 --- 10.0.0.1 ping statistics --- 00:08:00.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.333 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2977755 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2977755 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2977755 ']' 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.333 [2024-12-11 14:48:52.545920] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
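The nvmf_tcp_init steps traced above set up the standard two-sided rig for a phy run: the target E810 port (cvl_0_0) is moved into its own network namespace, the initiator port (cvl_0_1) stays in the root namespace, an SPDK-tagged iptables rule opens TCP/4420, and both directions are ping-checked before the target is launched inside the namespace. Condensed into plain commands with this run's names and addresses (a sketch of what nvmf/common.sh does, not a verbatim excerpt):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                              # target NIC port goes into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                    # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0      # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF                                     # tagged so teardown can filter it back out
  ping -c 1 10.0.0.2                                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                       # target -> initiator
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

Because only the network stack is namespaced, the target's RPC Unix socket (/var/tmp/spdk.sock) remains reachable from the root namespace, which is why the rpc_cmd calls that follow need no netns wrapper.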
00:08:00.333 [2024-12-11 14:48:52.545971] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.333 [2024-12-11 14:48:52.629269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.333 [2024-12-11 14:48:52.669309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.333 [2024-12-11 14:48:52.669344] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.333 [2024-12-11 14:48:52.669351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.333 [2024-12-11 14:48:52.669357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.333 [2024-12-11 14:48:52.669363] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.333 [2024-12-11 14:48:52.669915] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.333 [2024-12-11 14:48:52.806247] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.333 Malloc0 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.333 14:48:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:00.333 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.334 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.334 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.334 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.334 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.334 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.334 [2024-12-11 14:48:52.852289] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.334 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.334 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2977827 00:08:00.334 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:00.334 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:00.334 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2977827 /var/tmp/bdevperf.sock 00:08:00.334 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2977827 ']' 00:08:00.334 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:00.334 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.334 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:00.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:00.334 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.334 14:48:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.334 [2024-12-11 14:48:52.903517] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
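queue_depth.sh drives everything above over RPC (the rpc_cmd calls at script lines 23-27, plus line 34 against the bdevperf socket). Written out as direct rpc.py invocations with this run's values, the target-side setup and the bdevperf attach look roughly like this (flags reproduced as the script passes them):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py

  $RPC nvmf_create_transport -t tcp -o -u 8192                      # TCP transport
  $RPC bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB malloc bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # bdevperf was started with: -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests                       # kick off the 10 s verify run

As a sanity check on the numbers reported below: the run settles around 12,250 IOPS at an average latency of roughly 83.3 ms, which is self-consistent for a queue depth of 1024 (12,250 IOPS x 0.0833 s is about 1,020 requests in flight), i.e. the malloc-backed target is simply saturated at the configured depth rather than dropping or timing out I/O.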
00:08:00.334 [2024-12-11 14:48:52.903559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2977827 ] 00:08:00.334 [2024-12-11 14:48:52.978829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.334 [2024-12-11 14:48:53.020434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.334 14:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.334 14:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:00.334 14:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:00.334 14:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.334 14:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.334 NVMe0n1 00:08:00.334 14:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.334 14:48:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:00.593 Running I/O for 10 seconds... 00:08:02.464 11502.00 IOPS, 44.93 MiB/s [2024-12-11T13:48:56.450Z] 11886.00 IOPS, 46.43 MiB/s [2024-12-11T13:48:57.827Z] 11950.33 IOPS, 46.68 MiB/s [2024-12-11T13:48:58.763Z] 12032.50 IOPS, 47.00 MiB/s [2024-12-11T13:48:59.700Z] 12109.00 IOPS, 47.30 MiB/s [2024-12-11T13:49:00.637Z] 12157.00 IOPS, 47.49 MiB/s [2024-12-11T13:49:01.574Z] 12159.71 IOPS, 47.50 MiB/s [2024-12-11T13:49:02.510Z] 12192.00 IOPS, 47.62 MiB/s [2024-12-11T13:49:03.447Z] 12210.78 IOPS, 47.70 MiB/s [2024-12-11T13:49:03.706Z] 12232.40 IOPS, 47.78 MiB/s 00:08:10.658 Latency(us) 00:08:10.658 [2024-12-11T13:49:03.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.658 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:10.658 Verification LBA range: start 0x0 length 0x4000 00:08:10.658 NVMe0n1 : 10.06 12249.96 47.85 0.00 0.00 83289.31 19261.89 55392.17 00:08:10.658 [2024-12-11T13:49:03.706Z] =================================================================================================================== 00:08:10.658 [2024-12-11T13:49:03.706Z] Total : 12249.96 47.85 0.00 0.00 83289.31 19261.89 55392.17 00:08:10.658 { 00:08:10.658 "results": [ 00:08:10.658 { 00:08:10.658 "job": "NVMe0n1", 00:08:10.658 "core_mask": "0x1", 00:08:10.658 "workload": "verify", 00:08:10.658 "status": "finished", 00:08:10.658 "verify_range": { 00:08:10.658 "start": 0, 00:08:10.658 "length": 16384 00:08:10.658 }, 00:08:10.658 "queue_depth": 1024, 00:08:10.658 "io_size": 4096, 00:08:10.658 "runtime": 10.061582, 00:08:10.658 "iops": 12249.962282273305, 00:08:10.658 "mibps": 47.8514151651301, 00:08:10.658 "io_failed": 0, 00:08:10.658 "io_timeout": 0, 00:08:10.658 "avg_latency_us": 83289.30805704162, 00:08:10.658 "min_latency_us": 19261.885217391304, 00:08:10.658 "max_latency_us": 55392.16695652174 00:08:10.658 } 00:08:10.658 ], 00:08:10.658 "core_count": 1 00:08:10.658 } 00:08:10.658 14:49:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2977827 00:08:10.658 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2977827 ']' 00:08:10.658 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2977827 00:08:10.658 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:10.658 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.658 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2977827 00:08:10.658 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:10.658 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:10.658 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2977827' 00:08:10.658 killing process with pid 2977827 00:08:10.658 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2977827 00:08:10.658 Received shutdown signal, test time was about 10.000000 seconds 00:08:10.658 00:08:10.658 Latency(us) 00:08:10.658 [2024-12-11T13:49:03.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.658 [2024-12-11T13:49:03.706Z] =================================================================================================================== 00:08:10.658 [2024-12-11T13:49:03.706Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:10.658 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2977827 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:10.918 rmmod nvme_tcp 00:08:10.918 rmmod nvme_fabrics 00:08:10.918 rmmod nvme_keyring 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2977755 ']' 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2977755 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2977755 ']' 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@958 -- # kill -0 2977755 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2977755 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2977755' 00:08:10.918 killing process with pid 2977755 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2977755 00:08:10.918 14:49:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2977755 00:08:11.177 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:11.178 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:11.178 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:11.178 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:11.178 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:11.178 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:11.178 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:11.178 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:11.178 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:11.178 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.178 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.178 14:49:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.082 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:13.082 00:08:13.082 real 0m19.781s 00:08:13.082 user 0m23.121s 00:08:13.082 sys 0m6.120s 00:08:13.082 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.082 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:13.082 ************************************ 00:08:13.082 END TEST nvmf_queue_depth 00:08:13.082 ************************************ 00:08:13.082 14:49:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:13.082 14:49:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:13.082 14:49:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.082 14:49:06 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:13.341 ************************************ 00:08:13.341 START TEST nvmf_target_multipath 00:08:13.341 ************************************ 00:08:13.341 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:13.341 * Looking for test storage... 00:08:13.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:08:13.341 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:13.341 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:13.341 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:13.341 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:13.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.342 --rc genhtml_branch_coverage=1 00:08:13.342 --rc genhtml_function_coverage=1 00:08:13.342 --rc genhtml_legend=1 00:08:13.342 --rc geninfo_all_blocks=1 00:08:13.342 --rc geninfo_unexecuted_blocks=1 00:08:13.342 00:08:13.342 ' 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:13.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.342 --rc genhtml_branch_coverage=1 00:08:13.342 --rc genhtml_function_coverage=1 00:08:13.342 --rc genhtml_legend=1 00:08:13.342 --rc geninfo_all_blocks=1 00:08:13.342 --rc geninfo_unexecuted_blocks=1 00:08:13.342 00:08:13.342 ' 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:13.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.342 --rc genhtml_branch_coverage=1 00:08:13.342 --rc genhtml_function_coverage=1 00:08:13.342 --rc genhtml_legend=1 00:08:13.342 --rc geninfo_all_blocks=1 00:08:13.342 --rc geninfo_unexecuted_blocks=1 00:08:13.342 00:08:13.342 ' 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:13.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.342 --rc genhtml_branch_coverage=1 00:08:13.342 --rc genhtml_function_coverage=1 00:08:13.342 --rc genhtml_legend=1 00:08:13.342 --rc geninfo_all_blocks=1 00:08:13.342 --rc geninfo_unexecuted_blocks=1 00:08:13.342 00:08:13.342 ' 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:13.342 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:13.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:13.343 14:49:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:19.914 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:19.914 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:19.914 Found net devices under 0000:86:00.0: cvl_0_0 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:19.914 14:49:12 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:19.914 Found net devices under 0000:86:00.1: cvl_0_1 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:19.914 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:19.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:08:19.915 00:08:19.915 --- 10.0.0.2 ping statistics --- 00:08:19.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.915 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:19.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:08:19.915 00:08:19.915 --- 10.0.0.1 ping statistics --- 00:08:19.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.915 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:19.915 only one NIC for nvmf test 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
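The nvmftestinit trace above (nvmf/common.sh@250 through @291) captures the full bring-up of the TCP test bed on this host: the two E810 ports found earlier (cvl_0_0 and cvl_0_1) are split between a private network namespace for the target side and the root namespace for the initiator side, 10.0.0.2 and 10.0.0.1 are assigned, an iptables rule opens port 4420, and a ping in each direction proves connectivity before nvme-tcp is loaded. Condensed into a minimal sketch using the interface and namespace names from this run (the surrounding helper functions and error handling are omitted):

# start from clean addresses on both ports
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# the target-side port moves into its own namespace so target and
# initiator can share one machine
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# the initiator keeps cvl_0_1 in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port; the comment tag lets teardown strip the rule again
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# verify both directions, then load the kernel host driver
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp

The multipath test itself then exits early ("only one NIC for nvmf test"), since it needs a second target/initiator interface pair, and the nvmftestfini trace that follows is the mirror image: unload nvme-tcp and nvme-fabrics, restore iptables without the SPDK_NVMF-tagged rule, call _remove_spdk_ns, and flush the initiator address.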
00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:19.915 rmmod nvme_tcp 00:08:19.915 rmmod nvme_fabrics 00:08:19.915 rmmod nvme_keyring 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.915 14:49:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:21.822 00:08:21.822 real 0m8.396s 00:08:21.822 user 0m1.820s 00:08:21.822 sys 0m4.547s 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:21.822 ************************************ 00:08:21.822 END TEST nvmf_target_multipath 00:08:21.822 ************************************ 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:21.822 ************************************ 00:08:21.822 START TEST nvmf_zcopy 00:08:21.822 ************************************ 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:21.822 * Looking for test storage... 
00:08:21.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:21.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.822 --rc genhtml_branch_coverage=1 00:08:21.822 --rc genhtml_function_coverage=1 00:08:21.822 --rc genhtml_legend=1 00:08:21.822 --rc geninfo_all_blocks=1 00:08:21.822 --rc geninfo_unexecuted_blocks=1 00:08:21.822 00:08:21.822 ' 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:21.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.822 --rc genhtml_branch_coverage=1 00:08:21.822 --rc genhtml_function_coverage=1 00:08:21.822 --rc genhtml_legend=1 00:08:21.822 --rc geninfo_all_blocks=1 00:08:21.822 --rc geninfo_unexecuted_blocks=1 00:08:21.822 00:08:21.822 ' 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:21.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.822 --rc genhtml_branch_coverage=1 00:08:21.822 --rc genhtml_function_coverage=1 00:08:21.822 --rc genhtml_legend=1 00:08:21.822 --rc geninfo_all_blocks=1 00:08:21.822 --rc geninfo_unexecuted_blocks=1 00:08:21.822 00:08:21.822 ' 00:08:21.822 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:21.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.822 --rc genhtml_branch_coverage=1 00:08:21.822 --rc genhtml_function_coverage=1 00:08:21.822 --rc genhtml_legend=1 00:08:21.822 --rc geninfo_all_blocks=1 00:08:21.822 --rc geninfo_unexecuted_blocks=1 00:08:21.822 00:08:21.822 ' 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:21.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:21.823 14:49:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:28.394 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:28.394 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:28.394 Found net devices under 0000:86:00.0: cvl_0_0 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:28.394 Found net devices under 0000:86:00.1: cvl_0_1 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.394 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:28.395 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:28.395 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:08:28.395 00:08:28.395 --- 10.0.0.2 ping statistics --- 00:08:28.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.395 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:28.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:08:28.395 00:08:28.395 --- 10.0.0.1 ping statistics --- 00:08:28.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.395 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2986732 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2986732 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2986732 ']' 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.395 14:49:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:28.395 [2024-12-11 14:49:20.907031] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:08:28.395 [2024-12-11 14:49:20.907074] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.395 [2024-12-11 14:49:20.984891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.395 [2024-12-11 14:49:21.025135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.395 [2024-12-11 14:49:21.025175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.395 [2024-12-11 14:49:21.025184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.395 [2024-12-11 14:49:21.025190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.395 [2024-12-11 14:49:21.025195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.395 [2024-12-11 14:49:21.025742] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:28.395 [2024-12-11 14:49:21.174582] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:28.395 [2024-12-11 14:49:21.198794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:28.395 malloc0 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:28.395 { 00:08:28.395 "params": { 00:08:28.395 "name": "Nvme$subsystem", 00:08:28.395 "trtype": "$TEST_TRANSPORT", 00:08:28.395 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.395 "adrfam": "ipv4", 00:08:28.395 "trsvcid": "$NVMF_PORT", 00:08:28.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.395 "hdgst": ${hdgst:-false}, 00:08:28.395 "ddgst": ${ddgst:-false} 00:08:28.395 }, 00:08:28.395 "method": "bdev_nvme_attach_controller" 00:08:28.395 } 00:08:28.395 EOF 00:08:28.395 )") 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
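With connectivity in place, the zcopy test starts the target inside the namespace and configures it over RPC. nvmfappstart launches nvmf_tgt with core mask 0x2, and waitforlisten blocks until the process (pid 2986732 in this run) answers on /var/tmp/spdk.sock; the rpc_cmd calls that follow go through scripts/rpc.py against that socket. Written out as plain commands, in the order the trace issues them (a condensed sketch; the workspace path and flag values are the ones from this run):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
RPC="$SPDK/scripts/rpc.py"

# target lives in the namespace that owns cvl_0_0 / 10.0.0.2
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
# (the test waits for the RPC socket to come up before issuing these)

# TCP transport with zero-copy enabled and in-capsule data size 0,
# flags exactly as in the trace
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

# subsystem: allow any host (-a), fixed serial number, at most 10 namespaces
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# data listener plus a discovery listener on the target-side address
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# 32 MiB malloc bdev with 4096-byte blocks, exposed as namespace 1
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1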
00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:28.395 14:49:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:28.395 "params": { 00:08:28.395 "name": "Nvme1", 00:08:28.395 "trtype": "tcp", 00:08:28.395 "traddr": "10.0.0.2", 00:08:28.395 "adrfam": "ipv4", 00:08:28.395 "trsvcid": "4420", 00:08:28.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:28.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:28.395 "hdgst": false, 00:08:28.395 "ddgst": false 00:08:28.395 }, 00:08:28.395 "method": "bdev_nvme_attach_controller" 00:08:28.395 }' 00:08:28.395 [2024-12-11 14:49:21.283215] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:08:28.395 [2024-12-11 14:49:21.283255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2986753 ] 00:08:28.395 [2024-12-11 14:49:21.357971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.395 [2024-12-11 14:49:21.398253] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.654 Running I/O for 10 seconds... 00:08:30.607 8551.00 IOPS, 66.80 MiB/s [2024-12-11T13:49:25.032Z] 8613.50 IOPS, 67.29 MiB/s [2024-12-11T13:49:25.972Z] 8650.00 IOPS, 67.58 MiB/s [2024-12-11T13:49:26.908Z] 8674.50 IOPS, 67.77 MiB/s [2024-12-11T13:49:27.845Z] 8690.60 IOPS, 67.90 MiB/s [2024-12-11T13:49:28.783Z] 8700.00 IOPS, 67.97 MiB/s [2024-12-11T13:49:29.720Z] 8700.86 IOPS, 67.98 MiB/s [2024-12-11T13:49:30.657Z] 8700.25 IOPS, 67.97 MiB/s [2024-12-11T13:49:32.034Z] 8694.78 IOPS, 67.93 MiB/s [2024-12-11T13:49:32.034Z] 8692.70 IOPS, 67.91 MiB/s 00:08:38.986 Latency(us) 00:08:38.986 [2024-12-11T13:49:32.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.986 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:38.986 Verification LBA range: start 0x0 length 0x1000 00:08:38.986 Nvme1n1 : 10.01 8695.06 67.93 0.00 0.00 14678.13 1866.35 24048.86 00:08:38.986 [2024-12-11T13:49:32.034Z] =================================================================================================================== 00:08:38.986 [2024-12-11T13:49:32.034Z] Total : 8695.06 67.93 0.00 0.00 14678.13 1866.35 24048.86 00:08:38.986 14:49:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2988590 00:08:38.986 14:49:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:38.986 14:49:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:38.986 14:49:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:38.986 14:49:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:38.986 14:49:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:38.986 14:49:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:38.986 14:49:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:38.986 14:49:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:38.986 { 00:08:38.986 "params": { 00:08:38.986 "name": 
"Nvme$subsystem", 00:08:38.986 "trtype": "$TEST_TRANSPORT", 00:08:38.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:38.986 "adrfam": "ipv4", 00:08:38.986 "trsvcid": "$NVMF_PORT", 00:08:38.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:38.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:38.986 "hdgst": ${hdgst:-false}, 00:08:38.986 "ddgst": ${ddgst:-false} 00:08:38.986 }, 00:08:38.986 "method": "bdev_nvme_attach_controller" 00:08:38.986 } 00:08:38.986 EOF 00:08:38.986 )") 00:08:38.986 14:49:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:38.986 [2024-12-11 14:49:31.799185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.986 [2024-12-11 14:49:31.799223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.986 14:49:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:38.986 14:49:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:38.986 14:49:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:38.986 "params": { 00:08:38.986 "name": "Nvme1", 00:08:38.986 "trtype": "tcp", 00:08:38.986 "traddr": "10.0.0.2", 00:08:38.986 "adrfam": "ipv4", 00:08:38.986 "trsvcid": "4420", 00:08:38.986 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:38.986 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:38.986 "hdgst": false, 00:08:38.986 "ddgst": false 00:08:38.986 }, 00:08:38.986 "method": "bdev_nvme_attach_controller" 00:08:38.986 }' 00:08:38.986 [2024-12-11 14:49:31.811174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.986 [2024-12-11 14:49:31.811188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.986 [2024-12-11 14:49:31.823205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.986 [2024-12-11 14:49:31.823216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.986 [2024-12-11 14:49:31.835235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.986 [2024-12-11 14:49:31.835245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.986 [2024-12-11 14:49:31.835362] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:08:38.986 [2024-12-11 14:49:31.835404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2988590 ] 00:08:38.986 [2024-12-11 14:49:31.847267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.986 [2024-12-11 14:49:31.847278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.986 [2024-12-11 14:49:31.859301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.987 [2024-12-11 14:49:31.859311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.987 [2024-12-11 14:49:31.871333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.987 [2024-12-11 14:49:31.871344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.987 [2024-12-11 14:49:31.883364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.987 [2024-12-11 14:49:31.883375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.987 [2024-12-11 14:49:31.895394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.987 [2024-12-11 14:49:31.895404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.987 [2024-12-11 14:49:31.907426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.987 [2024-12-11 14:49:31.907436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.987 [2024-12-11 14:49:31.908752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.987 [2024-12-11 14:49:31.919463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.987 [2024-12-11 14:49:31.919477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.987 [2024-12-11 14:49:31.931493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.987 [2024-12-11 14:49:31.931505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.987 [2024-12-11 14:49:31.943533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.987 [2024-12-11 14:49:31.943548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.987 [2024-12-11 14:49:31.949250] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.987 [2024-12-11 14:49:31.955556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.987 [2024-12-11 14:49:31.955567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.987 [2024-12-11 14:49:31.967597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.987 [2024-12-11 14:49:31.967614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.987 [2024-12-11 14:49:31.979625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.987 [2024-12-11 14:49:31.979642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.987 [2024-12-11 14:49:31.991654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:08:38.987 [2024-12-11 14:49:31.991667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.987 [2024-12-11 14:49:32.003687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.987 [2024-12-11 14:49:32.003705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.987 [2024-12-11 14:49:32.015722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.987 [2024-12-11 14:49:32.015735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.987 [2024-12-11 14:49:32.027749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.987 [2024-12-11 14:49:32.027760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.245 [2024-12-11 14:49:32.039868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.245 [2024-12-11 14:49:32.039887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.245 [2024-12-11 14:49:32.051902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.245 [2024-12-11 14:49:32.051919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.245 [2024-12-11 14:49:32.063933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.245 [2024-12-11 14:49:32.063946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.245 [2024-12-11 14:49:32.075963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.245 [2024-12-11 14:49:32.075978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.245 [2024-12-11 14:49:32.087994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.245 [2024-12-11 14:49:32.088007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.245 [2024-12-11 14:49:32.100024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.245 [2024-12-11 14:49:32.100034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.245 [2024-12-11 14:49:32.112058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.245 [2024-12-11 14:49:32.112069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.245 [2024-12-11 14:49:32.124093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.245 [2024-12-11 14:49:32.124110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.246 [2024-12-11 14:49:32.136123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.246 [2024-12-11 14:49:32.136134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.246 [2024-12-11 14:49:32.148162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.246 [2024-12-11 14:49:32.148172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.246 [2024-12-11 14:49:32.160193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.246 [2024-12-11 14:49:32.160203] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.246 [2024-12-11 
14:49:32.172225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.246 [2024-12-11 14:49:32.172239] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.246 [2024-12-11 14:49:32.184256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.246 [2024-12-11 14:49:32.184266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.246 [2024-12-11 14:49:32.196287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.246 [2024-12-11 14:49:32.196298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.246 [2024-12-11 14:49:32.208323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.246 [2024-12-11 14:49:32.208336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.246 [2024-12-11 14:49:32.220357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.246 [2024-12-11 14:49:32.220373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.246 [2024-12-11 14:49:32.261525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.246 [2024-12-11 14:49:32.261542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.246 [2024-12-11 14:49:32.272499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.246 [2024-12-11 14:49:32.272512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.246 Running I/O for 5 seconds... 00:08:39.246 [2024-12-11 14:49:32.288646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.246 [2024-12-11 14:49:32.288667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.299652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.299672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.314330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.314350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.328459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.328479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.338104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.338123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.352330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.352351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.361658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.361677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.375545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
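From here until the randrw run finishes, the log is the same two-line pattern repeated: nvmf_subsystem_add_ns is issued again and again for NSID 1 while malloc0 already occupies it, spdk_nvmf_subsystem_add_ns_ext rejects it with "Requested NSID 1 already in use", and the nvmf_rpc_ns_paused callback (which runs with the subsystem paused) reports "Unable to add namespace". The loop that drives this lives in target/zcopy.sh and is not reproduced in this excerpt because xtrace is disabled around it; a hypothetical loop of the same shape, run alongside the bdevperf workload (iteration count and rpc.py path are assumptions):

# Churn the paused-subsystem add-namespace path while zero-copy I/O is in flight.
# Every attempt is expected to fail, since NSID 1 is already taken by malloc0.
for _ in $(seq 1 100); do
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done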
00:08:39.505 [2024-12-11 14:49:32.375565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.389172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.389191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.398180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.398199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.407482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.407500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.421928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.421948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.435337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.435356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.449223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.449242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.463141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.463169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.477195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.477215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.491374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.491394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.506824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.506844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.521302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.521321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.531867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.531885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.505 [2024-12-11 14:49:32.546285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.505 [2024-12-11 14:49:32.546304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.560189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.764 [2024-12-11 14:49:32.560208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.573517] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.764 [2024-12-11 14:49:32.573536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.587532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.764 [2024-12-11 14:49:32.587552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.596695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.764 [2024-12-11 14:49:32.596714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.611675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.764 [2024-12-11 14:49:32.611695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.623364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.764 [2024-12-11 14:49:32.623384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.637254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.764 [2024-12-11 14:49:32.637274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.650670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.764 [2024-12-11 14:49:32.650691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.664767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.764 [2024-12-11 14:49:32.664788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.673790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.764 [2024-12-11 14:49:32.673809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.688114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.764 [2024-12-11 14:49:32.688133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.701711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.764 [2024-12-11 14:49:32.701731] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.715712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.764 [2024-12-11 14:49:32.715732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.729407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.764 [2024-12-11 14:49:32.729428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.742827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.764 [2024-12-11 14:49:32.742846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.756517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.764 [2024-12-11 14:49:32.756536] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.770354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.764 [2024-12-11 14:49:32.770374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.764 [2024-12-11 14:49:32.779291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.765 [2024-12-11 14:49:32.779310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.765 [2024-12-11 14:49:32.793510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.765 [2024-12-11 14:49:32.793529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.765 [2024-12-11 14:49:32.806798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.765 [2024-12-11 14:49:32.806818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:32.821264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:32.821284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:32.835508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:32.835529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:32.844427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:32.844447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:32.853895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:32.853914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:32.863275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:32.863294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:32.872561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:32.872581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:32.887371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:32.887391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:32.898382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:32.898402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:32.907607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:32.907626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:32.917013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:32.917033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:32.931183] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:32.931204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:32.944795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:32.944815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:32.959114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:32.959133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:32.972909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:32.972933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:32.986864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:32.986884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:33.000885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:33.000905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:33.015050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:33.015069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:33.029270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:33.029290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:33.042874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:33.042893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:33.057050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:33.057069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.024 [2024-12-11 14:49:33.071061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.024 [2024-12-11 14:49:33.071080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.283 [2024-12-11 14:49:33.082042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.283 [2024-12-11 14:49:33.082061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.283 [2024-12-11 14:49:33.096697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.283 [2024-12-11 14:49:33.096716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.283 [2024-12-11 14:49:33.110265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.283 [2024-12-11 14:49:33.110285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.283 [2024-12-11 14:49:33.119643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.283 [2024-12-11 14:49:33.119662] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.283 [2024-12-11 14:49:33.128893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.283 [2024-12-11 14:49:33.128912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.283 [2024-12-11 14:49:33.143500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.283 [2024-12-11 14:49:33.143520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.283 [2024-12-11 14:49:33.152325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.283 [2024-12-11 14:49:33.152345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.283 [2024-12-11 14:49:33.166694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.283 [2024-12-11 14:49:33.166713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.284 [2024-12-11 14:49:33.180235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.284 [2024-12-11 14:49:33.180255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.284 [2024-12-11 14:49:33.194224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.284 [2024-12-11 14:49:33.194243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.284 [2024-12-11 14:49:33.208126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.284 [2024-12-11 14:49:33.208146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.284 [2024-12-11 14:49:33.217088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.284 [2024-12-11 14:49:33.217115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.284 [2024-12-11 14:49:33.231666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.284 [2024-12-11 14:49:33.231685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.284 [2024-12-11 14:49:33.240859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.284 [2024-12-11 14:49:33.240879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.284 [2024-12-11 14:49:33.250305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.284 [2024-12-11 14:49:33.250323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.284 [2024-12-11 14:49:33.264978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.284 [2024-12-11 14:49:33.264997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.284 16697.00 IOPS, 130.45 MiB/s [2024-12-11T13:49:33.332Z] [2024-12-11 14:49:33.279117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.284 [2024-12-11 14:49:33.279136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.284 [2024-12-11 14:49:33.289985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.284 [2024-12-11 14:49:33.290004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.284 [2024-12-11 
14:49:33.298776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.284 [2024-12-11 14:49:33.298794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.284 [2024-12-11 14:49:33.308088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.284 [2024-12-11 14:49:33.308106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.284 [2024-12-11 14:49:33.322779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.284 [2024-12-11 14:49:33.322799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.543 [2024-12-11 14:49:33.336778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.543 [2024-12-11 14:49:33.336797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.543 [2024-12-11 14:49:33.350442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.543 [2024-12-11 14:49:33.350460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.543 [2024-12-11 14:49:33.364245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.543 [2024-12-11 14:49:33.364264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.543 [2024-12-11 14:49:33.378230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.543 [2024-12-11 14:49:33.378255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.543 [2024-12-11 14:49:33.392100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.543 [2024-12-11 14:49:33.392120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.543 [2024-12-11 14:49:33.405530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.543 [2024-12-11 14:49:33.405549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.543 [2024-12-11 14:49:33.419813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.543 [2024-12-11 14:49:33.419833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.543 [2024-12-11 14:49:33.433481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.543 [2024-12-11 14:49:33.433505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.543 [2024-12-11 14:49:33.442463] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.543 [2024-12-11 14:49:33.442483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.543 [2024-12-11 14:49:33.456925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.543 [2024-12-11 14:49:33.456949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.543 [2024-12-11 14:49:33.470647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.543 [2024-12-11 14:49:33.470666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.543 [2024-12-11 14:49:33.484751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.543 [2024-12-11 14:49:33.484771] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.544 [2024-12-11 14:49:33.493888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.544 [2024-12-11 14:49:33.493906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.544 [2024-12-11 14:49:33.507940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.544 [2024-12-11 14:49:33.507959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.544 [2024-12-11 14:49:33.521639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.544 [2024-12-11 14:49:33.521658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.544 [2024-12-11 14:49:33.535542] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.544 [2024-12-11 14:49:33.535560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.544 [2024-12-11 14:49:33.544695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.544 [2024-12-11 14:49:33.544714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.544 [2024-12-11 14:49:33.558589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.544 [2024-12-11 14:49:33.558608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.544 [2024-12-11 14:49:33.567473] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.544 [2024-12-11 14:49:33.567492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.544 [2024-12-11 14:49:33.582578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.544 [2024-12-11 14:49:33.582597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.597758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.597778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.611469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.611488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.625302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.625322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.634225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.634244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.649118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.649136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.660323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.660342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.669767] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.669786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.683877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.683896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.697442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.697462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.711407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.711427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.720351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.720370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.729736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.729755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.738972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.738991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.753648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.753667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.767801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.767821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.778754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.778773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.793465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.793485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.807386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.807404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.821327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.821346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.835586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.835606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.803 [2024-12-11 14:49:33.844917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.803 [2024-12-11 14:49:33.844936] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.062 [2024-12-11 14:49:33.854580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.062 [2024-12-11 14:49:33.854599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.062 [2024-12-11 14:49:33.863981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.062 [2024-12-11 14:49:33.864000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.062 [2024-12-11 14:49:33.873409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.062 [2024-12-11 14:49:33.873427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.062 [2024-12-11 14:49:33.888455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.062 [2024-12-11 14:49:33.888475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.062 [2024-12-11 14:49:33.899484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.062 [2024-12-11 14:49:33.899502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.062 [2024-12-11 14:49:33.913954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.062 [2024-12-11 14:49:33.913973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.062 [2024-12-11 14:49:33.923092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.062 [2024-12-11 14:49:33.923112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.062 [2024-12-11 14:49:33.932462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.062 [2024-12-11 14:49:33.932480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.062 [2024-12-11 14:49:33.947075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.062 [2024-12-11 14:49:33.947094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.063 [2024-12-11 14:49:33.960778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.063 [2024-12-11 14:49:33.960797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.063 [2024-12-11 14:49:33.974357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.063 [2024-12-11 14:49:33.974376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.063 [2024-12-11 14:49:33.988364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.063 [2024-12-11 14:49:33.988383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.063 [2024-12-11 14:49:34.002408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.063 [2024-12-11 14:49:34.002428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.063 [2024-12-11 14:49:34.016703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.063 [2024-12-11 14:49:34.016724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.063 [2024-12-11 14:49:34.030437] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.063 [2024-12-11 14:49:34.030458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.063 [2024-12-11 14:49:34.039176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.063 [2024-12-11 14:49:34.039196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.063 [2024-12-11 14:49:34.053792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.063 [2024-12-11 14:49:34.053813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.063 [2024-12-11 14:49:34.064806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.063 [2024-12-11 14:49:34.064825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.063 [2024-12-11 14:49:34.079556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.063 [2024-12-11 14:49:34.079576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.063 [2024-12-11 14:49:34.090738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.063 [2024-12-11 14:49:34.090756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.063 [2024-12-11 14:49:34.100260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.063 [2024-12-11 14:49:34.100279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.063 [2024-12-11 14:49:34.109657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.063 [2024-12-11 14:49:34.109676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.322 [2024-12-11 14:49:34.124178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.322 [2024-12-11 14:49:34.124198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.322 [2024-12-11 14:49:34.137640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.322 [2024-12-11 14:49:34.137660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.322 [2024-12-11 14:49:34.151915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.322 [2024-12-11 14:49:34.151936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.322 [2024-12-11 14:49:34.165992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.322 [2024-12-11 14:49:34.166012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.322 [2024-12-11 14:49:34.180743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.322 [2024-12-11 14:49:34.180781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.322 [2024-12-11 14:49:34.196464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.322 [2024-12-11 14:49:34.196484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.322 [2024-12-11 14:49:34.210718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.322 [2024-12-11 14:49:34.210738] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:41.322 [2024-12-11 14:49:34.224113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:41.322 [2024-12-11 14:49:34.224133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats for every add-namespace attempt from 14:49:34.238416 through 14:49:37.182651 while the abort workload keeps running; periodic throughput readings over this interval: ...]
00:08:41.322 16708.00 IOPS, 130.53 MiB/s [2024-12-11T13:49:34.370Z]
00:08:42.361 16735.00 IOPS, 130.74 MiB/s [2024-12-11T13:49:35.409Z]
00:08:43.413 16761.00 IOPS, 130.95 MiB/s [2024-12-11T13:49:36.461Z]
[... identical add-namespace failures continue from 14:49:37.196715 through 14:49:37.293900 ...]
00:08:44.451 16776.00 IOPS, 131.06 MiB/s [2024-12-11T13:49:37.499Z]
00:08:44.451
00:08:44.451 Latency(us)
00:08:44.451 [2024-12-11T13:49:37.499Z] Device Information : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average  min      max
00:08:44.451 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:44.451 Nvme1n1            :       5.01  16776.73  131.07  0.00    0.00  7621.54  3148.58  18350.08
00:08:44.451 [2024-12-11T13:49:37.499Z] ===================================================================================================================
00:08:44.451 [2024-12-11T13:49:37.499Z] Total              :             16776.73  131.07  0.00    0.00  7621.54  3148.58  18350.08
[... the Requested NSID 1 already in use / Unable to add namespace pair recurs for the remaining attempts from 14:49:37.303351 through 14:49:37.447754 ...]
00:08:44.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2988590) - No such process
00:08:44.452 14:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2988590
00:08:44.452 14:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:44.452 14:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.452 14:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:44.452 14:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.452 14:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:08:44.452 14:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.452 14:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:44.452 delay0
00:08:44.452 14:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.452 14:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:08:44.452 14:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:44.452 14:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:44.452 14:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:44.452 14:49:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:08:44.711 [2024-12-11 14:49:37.598098] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:08:51.276 Initializing NVMe Controllers
00:08:51.276 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:51.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:51.276 Initialization complete. Launching workers.
00:08:51.276 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1064
00:08:51.276 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1342, failed to submit 42
00:08:51.276 success 1182, unsuccessful 160, failed 0
00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:51.276 rmmod nvme_tcp
00:08:51.276 rmmod nvme_fabrics
00:08:51.276 rmmod nvme_keyring
00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2986732 ']'
00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2986732
00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2986732 ']'
00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2986732
00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
-- # '[' Linux = Linux ']' 00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2986732 00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2986732' 00:08:51.276 killing process with pid 2986732 00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2986732 00:08:51.276 14:49:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2986732 00:08:51.276 14:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:51.276 14:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:51.276 14:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:51.276 14:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:51.276 14:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:51.276 14:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:51.276 14:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:51.276 14:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.276 14:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:51.276 14:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.276 14:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.276 14:49:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.181 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:53.181 00:08:53.181 real 0m31.494s 00:08:53.181 user 0m42.236s 00:08:53.181 sys 0m10.959s 00:08:53.181 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.181 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:53.181 ************************************ 00:08:53.181 END TEST nvmf_zcopy 00:08:53.181 ************************************ 00:08:53.181 14:49:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:53.181 14:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:53.181 14:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.181 14:49:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.181 ************************************ 00:08:53.181 START TEST nvmf_nmic 00:08:53.181 ************************************ 00:08:53.181 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:53.442 * Looking for test storage... 
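For reference, the zcopy case that just ended drives the target with the SPDK abort example app seen in the trace above. A minimal stand-alone sketch of that invocation, run from the SPDK repo root and assuming the usual SPDK example-app flag conventions (core mask, run time, queue depth, workload mix, transport ID), would be:

  # core mask 0x1, 5-second run, queue depth 64, 50/50 random read/write,
  # aimed at the NVMe/TCP listener the test created on 10.0.0.2:4420, namespace 1
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

Roughly, the summary lines above count I/Os that completed normally versus ones cut short, and abort commands that were accepted versus rejected or never submitted, which is the behaviour this case is exercising.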
00:08:53.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:53.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.442 --rc genhtml_branch_coverage=1 00:08:53.442 --rc genhtml_function_coverage=1 00:08:53.442 --rc genhtml_legend=1 00:08:53.442 --rc geninfo_all_blocks=1 00:08:53.442 --rc geninfo_unexecuted_blocks=1 00:08:53.442 00:08:53.442 ' 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:53.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.442 --rc genhtml_branch_coverage=1 00:08:53.442 --rc genhtml_function_coverage=1 00:08:53.442 --rc genhtml_legend=1 00:08:53.442 --rc geninfo_all_blocks=1 00:08:53.442 --rc geninfo_unexecuted_blocks=1 00:08:53.442 00:08:53.442 ' 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:53.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.442 --rc genhtml_branch_coverage=1 00:08:53.442 --rc genhtml_function_coverage=1 00:08:53.442 --rc genhtml_legend=1 00:08:53.442 --rc geninfo_all_blocks=1 00:08:53.442 --rc geninfo_unexecuted_blocks=1 00:08:53.442 00:08:53.442 ' 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:53.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.442 --rc genhtml_branch_coverage=1 00:08:53.442 --rc genhtml_function_coverage=1 00:08:53.442 --rc genhtml_legend=1 00:08:53.442 --rc geninfo_all_blocks=1 00:08:53.442 --rc geninfo_unexecuted_blocks=1 00:08:53.442 00:08:53.442 ' 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.442 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:53.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:53.443 
14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:53.443 14:49:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:00.016 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:00.016 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.016 14:49:52 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:00.016 Found net devices under 0000:86:00.0: cvl_0_0 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:00.016 Found net devices under 0000:86:00.1: cvl_0_1 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.016 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:00.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:09:00.017 00:09:00.017 --- 10.0.0.2 ping statistics --- 00:09:00.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.017 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:09:00.017 00:09:00.017 --- 10.0.0.1 ping statistics --- 00:09:00.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.017 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2994167 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2994167 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2994167 ']' 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.017 [2024-12-11 14:49:52.429750] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
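The nvmftestinit trace above amounts to a simple two-port loopback topology: one e810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its sibling port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched inside that namespace. Reduced to plain ip commands (the interface names are the ones this rig exposes; on another machine they would differ), the setup is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The two pings (to 10.0.0.2 from the host side and to 10.0.0.1 from inside the namespace) are a sanity check that traffic crosses the physical link in both directions before the NVMe/TCP listener is brought up.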
00:09:00.017 [2024-12-11 14:49:52.429791] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.017 [2024-12-11 14:49:52.509119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.017 [2024-12-11 14:49:52.549859] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.017 [2024-12-11 14:49:52.549898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.017 [2024-12-11 14:49:52.549906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.017 [2024-12-11 14:49:52.549914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.017 [2024-12-11 14:49:52.549918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.017 [2024-12-11 14:49:52.551498] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.017 [2024-12-11 14:49:52.551611] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.017 [2024-12-11 14:49:52.551694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.017 [2024-12-11 14:49:52.551695] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.017 [2024-12-11 14:49:52.697986] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.017 Malloc0 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.017 [2024-12-11 14:49:52.771753] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:00.017 test case1: single bdev can't be used in multiple subsystems 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.017 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.017 [2024-12-11 14:49:52.799657] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:00.017 [2024-12-11 14:49:52.799679] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:00.017 [2024-12-11 14:49:52.799687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.017 request: 00:09:00.017 { 00:09:00.017 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:00.017 "namespace": { 00:09:00.017 "bdev_name": "Malloc0", 00:09:00.018 "no_auto_visible": false, 
00:09:00.018 "hide_metadata": false 00:09:00.018 }, 00:09:00.018 "method": "nvmf_subsystem_add_ns", 00:09:00.018 "req_id": 1 00:09:00.018 } 00:09:00.018 Got JSON-RPC error response 00:09:00.018 response: 00:09:00.018 { 00:09:00.018 "code": -32602, 00:09:00.018 "message": "Invalid parameters" 00:09:00.018 } 00:09:00.018 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:00.018 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:00.018 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:00.018 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:00.018 Adding namespace failed - expected result. 00:09:00.018 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:00.018 test case2: host connect to nvmf target in multiple paths 00:09:00.018 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:00.018 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.018 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:00.018 [2024-12-11 14:49:52.811799] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:00.018 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.018 14:49:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:01.395 14:49:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:02.328 14:49:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:02.328 14:49:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:02.328 14:49:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:02.328 14:49:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:02.328 14:49:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:04.231 14:49:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:04.231 14:49:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:04.231 14:49:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:04.231 14:49:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:04.231 14:49:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:04.231 14:49:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:04.231 14:49:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:04.231 [global] 00:09:04.231 thread=1 00:09:04.231 invalidate=1 00:09:04.231 rw=write 00:09:04.231 time_based=1 00:09:04.231 runtime=1 00:09:04.231 ioengine=libaio 00:09:04.231 direct=1 00:09:04.231 bs=4096 00:09:04.231 iodepth=1 00:09:04.231 norandommap=0 00:09:04.231 numjobs=1 00:09:04.231 00:09:04.231 verify_dump=1 00:09:04.231 verify_backlog=512 00:09:04.231 verify_state_save=0 00:09:04.231 do_verify=1 00:09:04.231 verify=crc32c-intel 00:09:04.231 [job0] 00:09:04.231 filename=/dev/nvme0n1 00:09:04.231 Could not set queue depth (nvme0n1) 00:09:04.489 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:04.489 fio-3.35 00:09:04.489 Starting 1 thread 00:09:05.890 00:09:05.891 job0: (groupid=0, jobs=1): err= 0: pid=2995044: Wed Dec 11 14:49:58 2024 00:09:05.891 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:05.891 slat (nsec): min=6495, max=27166, avg=7392.08, stdev=986.68 00:09:05.891 clat (usec): min=151, max=394, avg=216.75, stdev=25.08 00:09:05.891 lat (usec): min=158, max=403, avg=224.15, stdev=25.12 00:09:05.891 clat percentiles (usec): 00:09:05.891 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 200], 00:09:05.891 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 219], 00:09:05.891 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 253], 95.00th=[ 262], 00:09:05.891 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 293], 99.95th=[ 293], 00:09:05.891 | 99.99th=[ 396] 00:09:05.891 write: IOPS=2833, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec); 0 zone resets 00:09:05.891 slat (usec): min=9, max=27285, avg=20.08, stdev=512.17 00:09:05.891 clat (usec): min=103, max=1717, avg=125.55, stdev=31.87 00:09:05.891 lat (usec): min=113, max=27473, avg=145.63, stdev=514.34 00:09:05.891 clat percentiles (usec): 00:09:05.891 | 1.00th=[ 109], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 118], 00:09:05.891 | 30.00th=[ 120], 40.00th=[ 122], 50.00th=[ 124], 60.00th=[ 126], 00:09:05.891 | 70.00th=[ 129], 80.00th=[ 131], 90.00th=[ 137], 95.00th=[ 145], 00:09:05.891 | 99.00th=[ 163], 99.50th=[ 172], 99.90th=[ 249], 99.95th=[ 281], 00:09:05.891 | 99.99th=[ 1713] 00:09:05.891 bw ( KiB/s): min=12263, max=12263, per=100.00%, avg=12263.00, stdev= 0.00, samples=1 00:09:05.891 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:09:05.891 lat (usec) : 250=94.09%, 500=5.89% 00:09:05.891 lat (msec) : 2=0.02% 00:09:05.891 cpu : usr=2.90%, sys=4.80%, ctx=5402, majf=0, minf=1 00:09:05.891 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.891 issued rwts: total=2560,2836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.891 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.891 00:09:05.891 Run status group 0 (all jobs): 00:09:05.891 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:09:05.891 WRITE: bw=11.1MiB/s (11.6MB/s), 11.1MiB/s-11.1MiB/s (11.6MB/s-11.6MB/s), io=11.1MiB (11.6MB), run=1001-1001msec 00:09:05.891 00:09:05.891 Disk stats (read/write): 00:09:05.891 nvme0n1: ios=2336/2560, merge=0/0, ticks=1471/312, in_queue=1783, util=98.40% 00:09:05.891 14:49:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:05.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.891 rmmod nvme_tcp 00:09:05.891 rmmod nvme_fabrics 00:09:05.891 rmmod nvme_keyring 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2994167 ']' 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2994167 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2994167 ']' 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2994167 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2994167 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2994167' 00:09:05.891 killing process with pid 2994167 00:09:05.891 14:49:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2994167 00:09:05.891 14:49:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2994167 00:09:06.150 14:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:06.150 14:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:06.150 14:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:06.150 14:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:06.150 14:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:06.150 14:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:06.150 14:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:06.150 14:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:06.150 14:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:06.150 14:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.150 14:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.150 14:49:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:08.687 00:09:08.687 real 0m14.939s 00:09:08.687 user 0m32.763s 00:09:08.687 sys 0m5.321s 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:08.687 ************************************ 00:09:08.687 END TEST nvmf_nmic 00:09:08.687 ************************************ 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:08.687 ************************************ 00:09:08.687 START TEST nvmf_fio_target 00:09:08.687 ************************************ 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:08.687 * Looking for test storage... 
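The fio_target test starting here repeats the same target bring-up, so it is worth summarizing what the preceding nmic run did once the xtrace noise is stripped away. The RPC sequence is (rpc.py is shown here only for readability; the test drives the same methods through its rpc_cmd wrapper):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Case 1 then creates a second subsystem (cnode2) and tries to add the same Malloc0 namespace to it; the "-32602 Invalid parameters" JSON-RPC error above is the expected outcome, since the bdev is already claimed exclusive_write by the first subsystem. Case 2 adds a second listener on port 4421, connects the host to both ports with nvme connect, runs the short write/verify fio job against the resulting /dev/nvme0n1, and disconnects both controllers.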
00:09:08.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.687 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:08.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.688 --rc genhtml_branch_coverage=1 00:09:08.688 --rc genhtml_function_coverage=1 00:09:08.688 --rc genhtml_legend=1 00:09:08.688 --rc geninfo_all_blocks=1 00:09:08.688 --rc geninfo_unexecuted_blocks=1 00:09:08.688 00:09:08.688 ' 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:08.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.688 --rc genhtml_branch_coverage=1 00:09:08.688 --rc genhtml_function_coverage=1 00:09:08.688 --rc genhtml_legend=1 00:09:08.688 --rc geninfo_all_blocks=1 00:09:08.688 --rc geninfo_unexecuted_blocks=1 00:09:08.688 00:09:08.688 ' 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:08.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.688 --rc genhtml_branch_coverage=1 00:09:08.688 --rc genhtml_function_coverage=1 00:09:08.688 --rc genhtml_legend=1 00:09:08.688 --rc geninfo_all_blocks=1 00:09:08.688 --rc geninfo_unexecuted_blocks=1 00:09:08.688 00:09:08.688 ' 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:08.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.688 --rc genhtml_branch_coverage=1 00:09:08.688 --rc genhtml_function_coverage=1 00:09:08.688 --rc genhtml_legend=1 00:09:08.688 --rc geninfo_all_blocks=1 00:09:08.688 --rc geninfo_unexecuted_blocks=1 00:09:08.688 00:09:08.688 ' 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:08.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:08.688 14:50:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:08.688 14:50:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.259 14:50:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:15.259 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:15.259 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.259 14:50:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:15.259 Found net devices under 0000:86:00.0: cvl_0_0 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:15.259 Found net devices under 0000:86:00.1: cvl_0_1 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:15.259 14:50:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:15.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:15.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:09:15.259 00:09:15.259 --- 10.0.0.2 ping statistics --- 00:09:15.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.259 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:09:15.259 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:15.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:15.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:09:15.260 00:09:15.260 --- 10.0.0.1 ping statistics --- 00:09:15.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:15.260 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2998815 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2998815 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2998815 ']' 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.260 [2024-12-11 14:50:07.430770] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:09:15.260 [2024-12-11 14:50:07.430822] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.260 [2024-12-11 14:50:07.511373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.260 [2024-12-11 14:50:07.555376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.260 [2024-12-11 14:50:07.555415] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.260 [2024-12-11 14:50:07.555423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.260 [2024-12-11 14:50:07.555429] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.260 [2024-12-11 14:50:07.555435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.260 [2024-12-11 14:50:07.556913] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.260 [2024-12-11 14:50:07.556950] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.260 [2024-12-11 14:50:07.557056] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.260 [2024-12-11 14:50:07.557057] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:15.260 [2024-12-11 14:50:07.875520] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.260 14:50:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.260 14:50:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:15.260 14:50:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.519 14:50:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:15.519 14:50:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.519 14:50:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:15.519 14:50:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:15.777 14:50:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:15.777 14:50:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:16.035 14:50:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.294 14:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:16.294 14:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.552 14:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:16.552 14:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:16.811 14:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:16.811 14:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:16.811 14:50:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:17.069 14:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:17.069 14:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:17.334 14:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:17.334 14:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:17.597 14:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.597 [2024-12-11 14:50:10.628064] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.856 14:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:17.856 14:50:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:18.114 14:50:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:19.491 
14:50:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:19.491 14:50:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:19.491 14:50:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:19.491 14:50:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:19.491 14:50:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:19.491 14:50:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:21.392 14:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:21.392 14:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:21.392 14:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:21.392 14:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:21.392 14:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:21.392 14:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:21.392 14:50:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:21.392 [global] 00:09:21.392 thread=1 00:09:21.392 invalidate=1 00:09:21.392 rw=write 00:09:21.392 time_based=1 00:09:21.392 runtime=1 00:09:21.392 ioengine=libaio 00:09:21.392 direct=1 00:09:21.392 bs=4096 00:09:21.392 iodepth=1 00:09:21.392 norandommap=0 00:09:21.392 numjobs=1 00:09:21.392 00:09:21.392 verify_dump=1 00:09:21.392 verify_backlog=512 00:09:21.392 verify_state_save=0 00:09:21.392 do_verify=1 00:09:21.392 verify=crc32c-intel 00:09:21.392 [job0] 00:09:21.392 filename=/dev/nvme0n1 00:09:21.392 [job1] 00:09:21.392 filename=/dev/nvme0n2 00:09:21.392 [job2] 00:09:21.392 filename=/dev/nvme0n3 00:09:21.392 [job3] 00:09:21.392 filename=/dev/nvme0n4 00:09:21.392 Could not set queue depth (nvme0n1) 00:09:21.392 Could not set queue depth (nvme0n2) 00:09:21.392 Could not set queue depth (nvme0n3) 00:09:21.392 Could not set queue depth (nvme0n4) 00:09:21.650 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.650 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.650 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.650 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:21.650 fio-3.35 00:09:21.650 Starting 4 threads 00:09:23.024 00:09:23.024 job0: (groupid=0, jobs=1): err= 0: pid=3000286: Wed Dec 11 14:50:15 2024 00:09:23.024 read: IOPS=2070, BW=8284KiB/s (8483kB/s)(8292KiB/1001msec) 00:09:23.024 slat (nsec): min=7316, max=42257, avg=8658.89, stdev=1508.55 00:09:23.024 clat (usec): min=188, max=451, avg=240.26, stdev=23.92 00:09:23.024 lat (usec): min=196, max=459, avg=248.92, stdev=23.88 00:09:23.024 clat percentiles (usec): 00:09:23.024 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 
225], 00:09:23.024 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:09:23.024 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:09:23.024 | 99.00th=[ 363], 99.50th=[ 379], 99.90th=[ 441], 99.95th=[ 449], 00:09:23.024 | 99.99th=[ 453] 00:09:23.024 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:23.024 slat (usec): min=10, max=20754, avg=20.38, stdev=409.95 00:09:23.024 clat (usec): min=121, max=302, avg=163.20, stdev=21.89 00:09:23.024 lat (usec): min=133, max=20946, avg=183.58, stdev=411.12 00:09:23.024 clat percentiles (usec): 00:09:23.024 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:09:23.024 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:09:23.024 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 192], 95.00th=[ 204], 00:09:23.024 | 99.00th=[ 249], 99.50th=[ 269], 99.90th=[ 293], 99.95th=[ 302], 00:09:23.024 | 99.99th=[ 302] 00:09:23.024 bw ( KiB/s): min= 9328, max= 9328, per=30.91%, avg=9328.00, stdev= 0.00, samples=1 00:09:23.024 iops : min= 2332, max= 2332, avg=2332.00, stdev= 0.00, samples=1 00:09:23.024 lat (usec) : 250=88.13%, 500=11.87% 00:09:23.024 cpu : usr=5.20%, sys=6.40%, ctx=4636, majf=0, minf=2 00:09:23.024 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.024 issued rwts: total=2073,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.024 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.024 job1: (groupid=0, jobs=1): err= 0: pid=3000302: Wed Dec 11 14:50:15 2024 00:09:23.024 read: IOPS=46, BW=185KiB/s (189kB/s)(188KiB/1018msec) 00:09:23.024 slat (nsec): min=7115, max=23608, avg=14475.94, stdev=7466.93 00:09:23.024 clat (usec): min=270, max=42263, avg=19414.61, stdev=20528.56 00:09:23.024 lat (usec): min=277, max=42285, avg=19429.09, stdev=20531.32 00:09:23.024 clat percentiles (usec): 00:09:23.024 | 1.00th=[ 269], 5.00th=[ 285], 10.00th=[ 322], 20.00th=[ 343], 00:09:23.024 | 30.00th=[ 355], 40.00th=[ 404], 50.00th=[ 486], 60.00th=[40633], 00:09:23.024 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:09:23.024 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:23.024 | 99.99th=[42206] 00:09:23.024 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:09:23.024 slat (nsec): min=9216, max=41251, avg=10196.37, stdev=1689.26 00:09:23.024 clat (usec): min=122, max=373, avg=191.92, stdev=23.44 00:09:23.024 lat (usec): min=132, max=415, avg=202.12, stdev=23.94 00:09:23.024 clat percentiles (usec): 00:09:23.024 | 1.00th=[ 145], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 172], 00:09:23.024 | 30.00th=[ 180], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:09:23.024 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 219], 95.00th=[ 231], 00:09:23.024 | 99.00th=[ 258], 99.50th=[ 262], 99.90th=[ 375], 99.95th=[ 375], 00:09:23.024 | 99.99th=[ 375] 00:09:23.024 bw ( KiB/s): min= 4096, max= 4096, per=13.57%, avg=4096.00, stdev= 0.00, samples=1 00:09:23.024 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:23.024 lat (usec) : 250=90.34%, 500=5.55%, 750=0.18% 00:09:23.024 lat (msec) : 50=3.94% 00:09:23.024 cpu : usr=0.29%, sys=0.49%, ctx=559, majf=0, minf=2 00:09:23.024 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.024 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.024 issued rwts: total=47,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.024 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.024 job2: (groupid=0, jobs=1): err= 0: pid=3000321: Wed Dec 11 14:50:15 2024 00:09:23.024 read: IOPS=1812, BW=7249KiB/s (7423kB/s)(7256KiB/1001msec) 00:09:23.024 slat (nsec): min=7225, max=23892, avg=8281.50, stdev=1110.38 00:09:23.024 clat (usec): min=191, max=40928, avg=325.50, stdev=1648.81 00:09:23.024 lat (usec): min=199, max=40936, avg=333.79, stdev=1648.81 00:09:23.024 clat percentiles (usec): 00:09:23.024 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 233], 00:09:23.024 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 251], 00:09:23.024 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 306], 95.00th=[ 363], 00:09:23.024 | 99.00th=[ 474], 99.50th=[ 515], 99.90th=[40633], 99.95th=[41157], 00:09:23.024 | 99.99th=[41157] 00:09:23.024 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:23.024 slat (nsec): min=10180, max=44332, avg=11287.56, stdev=1527.41 00:09:23.024 clat (usec): min=126, max=318, avg=175.85, stdev=27.03 00:09:23.024 lat (usec): min=137, max=329, avg=187.14, stdev=27.17 00:09:23.024 clat percentiles (usec): 00:09:23.024 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:09:23.024 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 176], 00:09:23.024 | 70.00th=[ 186], 80.00th=[ 200], 90.00th=[ 215], 95.00th=[ 223], 00:09:23.025 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 297], 00:09:23.025 | 99.99th=[ 318] 00:09:23.025 bw ( KiB/s): min= 8192, max= 8192, per=27.15%, avg=8192.00, stdev= 0.00, samples=1 00:09:23.025 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:23.025 lat (usec) : 250=80.22%, 500=19.47%, 750=0.23% 00:09:23.025 lat (msec) : 50=0.08% 00:09:23.025 cpu : usr=2.90%, sys=6.40%, ctx=3862, majf=0, minf=1 00:09:23.025 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.025 issued rwts: total=1814,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.025 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.025 job3: (groupid=0, jobs=1): err= 0: pid=3000327: Wed Dec 11 14:50:15 2024 00:09:23.025 read: IOPS=2095, BW=8384KiB/s (8585kB/s)(8392KiB/1001msec) 00:09:23.025 slat (nsec): min=7520, max=35813, avg=8734.80, stdev=1386.56 00:09:23.025 clat (usec): min=187, max=386, avg=231.14, stdev=24.84 00:09:23.025 lat (usec): min=195, max=395, avg=239.88, stdev=24.96 00:09:23.025 clat percentiles (usec): 00:09:23.025 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:09:23.025 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:09:23.025 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 273], 00:09:23.025 | 99.00th=[ 334], 99.50th=[ 351], 99.90th=[ 371], 99.95th=[ 375], 00:09:23.025 | 99.99th=[ 388] 00:09:23.025 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:23.025 slat (nsec): min=10843, max=38410, avg=12022.08, stdev=1506.93 00:09:23.025 clat (usec): min=124, max=890, avg=176.75, stdev=28.44 00:09:23.025 lat (usec): min=136, max=908, avg=188.77, stdev=28.59 00:09:23.025 clat percentiles (usec): 
00:09:23.025 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:09:23.025 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:09:23.025 | 70.00th=[ 184], 80.00th=[ 198], 90.00th=[ 215], 95.00th=[ 225], 00:09:23.025 | 99.00th=[ 253], 99.50th=[ 265], 99.90th=[ 281], 99.95th=[ 302], 00:09:23.025 | 99.99th=[ 889] 00:09:23.025 bw ( KiB/s): min= 9992, max= 9992, per=33.11%, avg=9992.00, stdev= 0.00, samples=1 00:09:23.025 iops : min= 2498, max= 2498, avg=2498.00, stdev= 0.00, samples=1 00:09:23.025 lat (usec) : 250=93.47%, 500=6.50%, 1000=0.02% 00:09:23.025 cpu : usr=4.40%, sys=7.20%, ctx=4660, majf=0, minf=1 00:09:23.025 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:23.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.025 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.025 issued rwts: total=2098,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.025 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:23.025 00:09:23.025 Run status group 0 (all jobs): 00:09:23.025 READ: bw=23.1MiB/s (24.3MB/s), 185KiB/s-8384KiB/s (189kB/s-8585kB/s), io=23.6MiB (24.7MB), run=1001-1018msec 00:09:23.025 WRITE: bw=29.5MiB/s (30.9MB/s), 2012KiB/s-9.99MiB/s (2060kB/s-10.5MB/s), io=30.0MiB (31.5MB), run=1001-1018msec 00:09:23.025 00:09:23.025 Disk stats (read/write): 00:09:23.025 nvme0n1: ios=1863/2048, merge=0/0, ticks=1411/299, in_queue=1710, util=97.80% 00:09:23.025 nvme0n2: ios=46/512, merge=0/0, ticks=757/91, in_queue=848, util=87.20% 00:09:23.025 nvme0n3: ios=1536/1952, merge=0/0, ticks=444/319, in_queue=763, util=88.84% 00:09:23.025 nvme0n4: ios=1919/2048, merge=0/0, ticks=1408/336, in_queue=1744, util=98.32% 00:09:23.025 14:50:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:23.025 [global] 00:09:23.025 thread=1 00:09:23.025 invalidate=1 00:09:23.025 rw=randwrite 00:09:23.025 time_based=1 00:09:23.025 runtime=1 00:09:23.025 ioengine=libaio 00:09:23.025 direct=1 00:09:23.025 bs=4096 00:09:23.025 iodepth=1 00:09:23.025 norandommap=0 00:09:23.025 numjobs=1 00:09:23.025 00:09:23.025 verify_dump=1 00:09:23.025 verify_backlog=512 00:09:23.025 verify_state_save=0 00:09:23.025 do_verify=1 00:09:23.025 verify=crc32c-intel 00:09:23.025 [job0] 00:09:23.025 filename=/dev/nvme0n1 00:09:23.025 [job1] 00:09:23.025 filename=/dev/nvme0n2 00:09:23.025 [job2] 00:09:23.025 filename=/dev/nvme0n3 00:09:23.025 [job3] 00:09:23.025 filename=/dev/nvme0n4 00:09:23.025 Could not set queue depth (nvme0n1) 00:09:23.025 Could not set queue depth (nvme0n2) 00:09:23.025 Could not set queue depth (nvme0n3) 00:09:23.025 Could not set queue depth (nvme0n4) 00:09:23.283 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.283 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.283 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.283 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:23.283 fio-3.35 00:09:23.283 Starting 4 threads 00:09:24.737 00:09:24.737 job0: (groupid=0, jobs=1): err= 0: pid=3000755: Wed Dec 11 14:50:17 2024 00:09:24.737 read: IOPS=1004, BW=4020KiB/s (4116kB/s)(4024KiB/1001msec) 00:09:24.737 
slat (nsec): min=6507, max=24867, avg=7688.68, stdev=1941.45 00:09:24.737 clat (usec): min=175, max=41222, avg=776.21, stdev=4602.97 00:09:24.737 lat (usec): min=182, max=41230, avg=783.90, stdev=4604.57 00:09:24.737 clat percentiles (usec): 00:09:24.737 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 219], 00:09:24.738 | 30.00th=[ 227], 40.00th=[ 237], 50.00th=[ 260], 60.00th=[ 265], 00:09:24.738 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:09:24.738 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:24.738 | 99.99th=[41157] 00:09:24.738 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:24.738 slat (nsec): min=8698, max=40395, avg=10564.01, stdev=2125.96 00:09:24.738 clat (usec): min=120, max=322, avg=190.31, stdev=44.50 00:09:24.738 lat (usec): min=130, max=333, avg=200.87, stdev=44.69 00:09:24.738 clat percentiles (usec): 00:09:24.738 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:09:24.738 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 169], 60.00th=[ 212], 00:09:24.738 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 247], 00:09:24.738 | 99.00th=[ 255], 99.50th=[ 273], 99.90th=[ 306], 99.95th=[ 322], 00:09:24.738 | 99.99th=[ 322] 00:09:24.738 bw ( KiB/s): min= 4096, max= 4096, per=21.97%, avg=4096.00, stdev= 0.00, samples=1 00:09:24.738 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:24.738 lat (usec) : 250=72.51%, 500=26.80%, 750=0.05% 00:09:24.738 lat (msec) : 50=0.64% 00:09:24.738 cpu : usr=1.10%, sys=1.80%, ctx=2031, majf=0, minf=1 00:09:24.738 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.738 issued rwts: total=1006,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.738 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.738 job1: (groupid=0, jobs=1): err= 0: pid=3000756: Wed Dec 11 14:50:17 2024 00:09:24.738 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:24.738 slat (nsec): min=6341, max=26434, avg=7469.67, stdev=854.00 00:09:24.738 clat (usec): min=149, max=485, avg=230.83, stdev=35.54 00:09:24.738 lat (usec): min=156, max=492, avg=238.30, stdev=35.57 00:09:24.738 clat percentiles (usec): 00:09:24.738 | 1.00th=[ 163], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 194], 00:09:24.738 | 30.00th=[ 212], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:09:24.738 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 260], 95.00th=[ 269], 00:09:24.738 | 99.00th=[ 347], 99.50th=[ 424], 99.90th=[ 453], 99.95th=[ 478], 00:09:24.738 | 99.99th=[ 486] 00:09:24.738 write: IOPS=2634, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec); 0 zone resets 00:09:24.738 slat (nsec): min=9009, max=43718, avg=10323.21, stdev=1299.54 00:09:24.738 clat (usec): min=100, max=273, avg=132.89, stdev=21.41 00:09:24.738 lat (usec): min=109, max=282, avg=143.21, stdev=21.55 00:09:24.738 clat percentiles (usec): 00:09:24.738 | 1.00th=[ 106], 5.00th=[ 110], 10.00th=[ 112], 20.00th=[ 116], 00:09:24.738 | 30.00th=[ 120], 40.00th=[ 124], 50.00th=[ 129], 60.00th=[ 135], 00:09:24.738 | 70.00th=[ 141], 80.00th=[ 149], 90.00th=[ 161], 95.00th=[ 172], 00:09:24.738 | 99.00th=[ 204], 99.50th=[ 233], 99.90th=[ 253], 99.95th=[ 269], 00:09:24.738 | 99.99th=[ 273] 00:09:24.738 bw ( KiB/s): min=12288, max=12288, per=65.90%, avg=12288.00, stdev= 0.00, samples=1 00:09:24.738 iops 
: min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:24.738 lat (usec) : 250=85.88%, 500=14.12% 00:09:24.738 cpu : usr=2.50%, sys=4.80%, ctx=5198, majf=0, minf=1 00:09:24.738 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.738 issued rwts: total=2560,2637,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.738 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.738 job2: (groupid=0, jobs=1): err= 0: pid=3000757: Wed Dec 11 14:50:17 2024 00:09:24.738 read: IOPS=70, BW=283KiB/s (289kB/s)(284KiB/1005msec) 00:09:24.738 slat (nsec): min=7212, max=27197, avg=13097.58, stdev=6989.45 00:09:24.738 clat (usec): min=219, max=41141, avg=12651.43, stdev=18558.36 00:09:24.738 lat (usec): min=243, max=41154, avg=12664.52, stdev=18563.17 00:09:24.738 clat percentiles (usec): 00:09:24.738 | 1.00th=[ 221], 5.00th=[ 251], 10.00th=[ 277], 20.00th=[ 289], 00:09:24.738 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 408], 00:09:24.738 | 70.00th=[11207], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:24.738 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:24.738 | 99.99th=[41157] 00:09:24.738 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:09:24.738 slat (nsec): min=9309, max=42273, avg=10380.79, stdev=2753.70 00:09:24.738 clat (usec): min=149, max=370, avg=193.94, stdev=24.95 00:09:24.738 lat (usec): min=158, max=413, avg=204.32, stdev=25.99 00:09:24.738 clat percentiles (usec): 00:09:24.738 | 1.00th=[ 155], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 178], 00:09:24.738 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 192], 00:09:24.738 | 70.00th=[ 196], 80.00th=[ 206], 90.00th=[ 229], 95.00th=[ 245], 00:09:24.738 | 99.00th=[ 273], 99.50th=[ 322], 99.90th=[ 371], 99.95th=[ 371], 00:09:24.738 | 99.99th=[ 371] 00:09:24.738 bw ( KiB/s): min= 4096, max= 4096, per=21.97%, avg=4096.00, stdev= 0.00, samples=1 00:09:24.738 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:24.738 lat (usec) : 250=86.11%, 500=9.61% 00:09:24.738 lat (msec) : 2=0.34%, 10=0.17%, 20=0.17%, 50=3.60% 00:09:24.738 cpu : usr=0.40%, sys=0.40%, ctx=583, majf=0, minf=2 00:09:24.738 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.738 issued rwts: total=71,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.738 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.738 job3: (groupid=0, jobs=1): err= 0: pid=3000758: Wed Dec 11 14:50:17 2024 00:09:24.738 read: IOPS=179, BW=719KiB/s (737kB/s)(720KiB/1001msec) 00:09:24.738 slat (nsec): min=6712, max=26193, avg=9454.75, stdev=4980.05 00:09:24.738 clat (usec): min=196, max=41506, avg=4980.38, stdev=13024.86 00:09:24.738 lat (usec): min=203, max=41514, avg=4989.83, stdev=13024.48 00:09:24.738 clat percentiles (usec): 00:09:24.738 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 223], 00:09:24.738 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 258], 60.00th=[ 273], 00:09:24.738 | 70.00th=[ 314], 80.00th=[ 359], 90.00th=[40633], 95.00th=[41157], 00:09:24.738 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:24.738 | 99.99th=[41681] 
00:09:24.738 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:24.738 slat (nsec): min=9062, max=49788, avg=10594.46, stdev=3331.93 00:09:24.738 clat (usec): min=141, max=321, avg=187.04, stdev=19.09 00:09:24.738 lat (usec): min=150, max=360, avg=197.63, stdev=19.36 00:09:24.738 clat percentiles (usec): 00:09:24.738 | 1.00th=[ 147], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 176], 00:09:24.738 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:09:24.738 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 208], 95.00th=[ 217], 00:09:24.738 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 322], 99.95th=[ 322], 00:09:24.738 | 99.99th=[ 322] 00:09:24.738 bw ( KiB/s): min= 4096, max= 4096, per=21.97%, avg=4096.00, stdev= 0.00, samples=1 00:09:24.738 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:24.738 lat (usec) : 250=84.68%, 500=12.14%, 1000=0.14% 00:09:24.738 lat (msec) : 50=3.03% 00:09:24.738 cpu : usr=0.50%, sys=0.50%, ctx=692, majf=0, minf=1 00:09:24.738 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:24.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.738 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:24.738 issued rwts: total=180,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:24.738 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:24.738 00:09:24.738 Run status group 0 (all jobs): 00:09:24.738 READ: bw=14.8MiB/s (15.6MB/s), 283KiB/s-9.99MiB/s (289kB/s-10.5MB/s), io=14.9MiB (15.6MB), run=1001-1005msec 00:09:24.738 WRITE: bw=18.2MiB/s (19.1MB/s), 2038KiB/s-10.3MiB/s (2087kB/s-10.8MB/s), io=18.3MiB (19.2MB), run=1001-1005msec 00:09:24.738 00:09:24.738 Disk stats (read/write): 00:09:24.738 nvme0n1: ios=537/836, merge=0/0, ticks=1656/161, in_queue=1817, util=97.90% 00:09:24.738 nvme0n2: ios=2082/2413, merge=0/0, ticks=891/312, in_queue=1203, util=100.00% 00:09:24.738 nvme0n3: ios=65/512, merge=0/0, ticks=725/99, in_queue=824, util=88.97% 00:09:24.738 nvme0n4: ios=171/512, merge=0/0, ticks=729/90, in_queue=819, util=89.62% 00:09:24.738 14:50:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:24.738 [global] 00:09:24.738 thread=1 00:09:24.738 invalidate=1 00:09:24.738 rw=write 00:09:24.738 time_based=1 00:09:24.738 runtime=1 00:09:24.738 ioengine=libaio 00:09:24.738 direct=1 00:09:24.738 bs=4096 00:09:24.738 iodepth=128 00:09:24.738 norandommap=0 00:09:24.738 numjobs=1 00:09:24.738 00:09:24.738 verify_dump=1 00:09:24.738 verify_backlog=512 00:09:24.738 verify_state_save=0 00:09:24.738 do_verify=1 00:09:24.738 verify=crc32c-intel 00:09:24.738 [job0] 00:09:24.738 filename=/dev/nvme0n1 00:09:24.738 [job1] 00:09:24.738 filename=/dev/nvme0n2 00:09:24.738 [job2] 00:09:24.738 filename=/dev/nvme0n3 00:09:24.738 [job3] 00:09:24.738 filename=/dev/nvme0n4 00:09:24.738 Could not set queue depth (nvme0n1) 00:09:24.738 Could not set queue depth (nvme0n2) 00:09:24.738 Could not set queue depth (nvme0n3) 00:09:24.738 Could not set queue depth (nvme0n4) 00:09:25.082 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:25.083 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:25.083 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:09:25.083 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:25.083 fio-3.35 00:09:25.083 Starting 4 threads 00:09:26.019 00:09:26.019 job0: (groupid=0, jobs=1): err= 0: pid=3001133: Wed Dec 11 14:50:18 2024 00:09:26.019 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:09:26.019 slat (nsec): min=1048, max=14405k, avg=146301.32, stdev=887239.11 00:09:26.019 clat (usec): min=6525, max=55541, avg=18973.67, stdev=8177.68 00:09:26.019 lat (usec): min=6636, max=55549, avg=19119.97, stdev=8216.14 00:09:26.019 clat percentiles (usec): 00:09:26.019 | 1.00th=[ 7308], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[11600], 00:09:26.019 | 30.00th=[15008], 40.00th=[18220], 50.00th=[19006], 60.00th=[21103], 00:09:26.019 | 70.00th=[21365], 80.00th=[22938], 90.00th=[25822], 95.00th=[28705], 00:09:26.019 | 99.00th=[48497], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:09:26.019 | 99.99th=[55313] 00:09:26.019 write: IOPS=3928, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1004msec); 0 zone resets 00:09:26.019 slat (nsec): min=1769, max=6078.1k, avg=115670.56, stdev=590508.01 00:09:26.019 clat (usec): min=2243, max=32082, avg=14931.65, stdev=5135.42 00:09:26.019 lat (usec): min=6447, max=32089, avg=15047.32, stdev=5152.11 00:09:26.019 clat percentiles (usec): 00:09:26.019 | 1.00th=[ 7177], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9896], 00:09:26.019 | 30.00th=[10552], 40.00th=[12780], 50.00th=[14877], 60.00th=[16909], 00:09:26.019 | 70.00th=[17171], 80.00th=[17957], 90.00th=[21627], 95.00th=[23987], 00:09:26.019 | 99.00th=[31065], 99.50th=[31589], 99.90th=[32113], 99.95th=[32113], 00:09:26.019 | 99.99th=[32113] 00:09:26.019 bw ( KiB/s): min=14152, max=16351, per=21.14%, avg=15251.50, stdev=1554.93, samples=2 00:09:26.019 iops : min= 3538, max= 4087, avg=3812.50, stdev=388.20, samples=2 00:09:26.019 lat (msec) : 4=0.01%, 10=19.04%, 20=51.79%, 50=28.75%, 100=0.41% 00:09:26.019 cpu : usr=2.89%, sys=3.49%, ctx=357, majf=0, minf=1 00:09:26.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:26.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.019 issued rwts: total=3584,3944,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.019 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.019 job1: (groupid=0, jobs=1): err= 0: pid=3001134: Wed Dec 11 14:50:18 2024 00:09:26.019 read: IOPS=5652, BW=22.1MiB/s (23.2MB/s)(22.2MiB/1004msec) 00:09:26.019 slat (nsec): min=1440, max=9338.3k, avg=83773.45, stdev=461883.39 00:09:26.019 clat (usec): min=732, max=24479, avg=10660.01, stdev=2365.52 00:09:26.019 lat (usec): min=5008, max=24492, avg=10743.78, stdev=2386.35 00:09:26.019 clat percentiles (usec): 00:09:26.019 | 1.00th=[ 7570], 5.00th=[ 8094], 10.00th=[ 8717], 20.00th=[ 9372], 00:09:26.019 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:09:26.019 | 70.00th=[10814], 80.00th=[11338], 90.00th=[12387], 95.00th=[16450], 00:09:26.019 | 99.00th=[21365], 99.50th=[21365], 99.90th=[24511], 99.95th=[24511], 00:09:26.019 | 99.99th=[24511] 00:09:26.019 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:09:26.019 slat (usec): min=2, max=20046, avg=80.39, stdev=488.35 00:09:26.019 clat (usec): min=6233, max=29931, avg=10812.58, stdev=3184.52 00:09:26.019 lat (usec): min=6261, max=29963, avg=10892.97, stdev=3210.68 00:09:26.019 clat percentiles (usec): 00:09:26.019 
| 1.00th=[ 6849], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9634], 00:09:26.019 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10159], 00:09:26.019 | 70.00th=[10290], 80.00th=[10552], 90.00th=[12518], 95.00th=[17171], 00:09:26.019 | 99.00th=[27132], 99.50th=[27395], 99.90th=[27395], 99.95th=[29754], 00:09:26.019 | 99.99th=[30016] 00:09:26.019 bw ( KiB/s): min=23896, max=24576, per=33.60%, avg=24236.00, stdev=480.83, samples=2 00:09:26.019 iops : min= 5974, max= 6144, avg=6059.00, stdev=120.21, samples=2 00:09:26.019 lat (usec) : 750=0.01% 00:09:26.019 lat (msec) : 10=42.54%, 20=55.49%, 50=1.96% 00:09:26.019 cpu : usr=4.79%, sys=6.68%, ctx=584, majf=0, minf=1 00:09:26.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:26.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.019 issued rwts: total=5675,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.019 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.019 job2: (groupid=0, jobs=1): err= 0: pid=3001136: Wed Dec 11 14:50:18 2024 00:09:26.019 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:09:26.019 slat (nsec): min=1583, max=6339.3k, avg=126828.40, stdev=596737.21 00:09:26.019 clat (usec): min=8736, max=41092, avg=17308.65, stdev=7398.69 00:09:26.019 lat (usec): min=9362, max=41105, avg=17435.48, stdev=7416.79 00:09:26.019 clat percentiles (usec): 00:09:26.019 | 1.00th=[ 9634], 5.00th=[10945], 10.00th=[11469], 20.00th=[11994], 00:09:26.019 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13304], 60.00th=[15926], 00:09:26.019 | 70.00th=[20317], 80.00th=[22152], 90.00th=[25297], 95.00th=[36963], 00:09:26.019 | 99.00th=[38536], 99.50th=[39584], 99.90th=[39584], 99.95th=[41157], 00:09:26.019 | 99.99th=[41157] 00:09:26.019 write: IOPS=4026, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1003msec); 0 zone resets 00:09:26.019 slat (usec): min=2, max=40855, avg=128.70, stdev=1091.22 00:09:26.019 clat (usec): min=2366, max=83859, avg=14642.57, stdev=8052.05 00:09:26.019 lat (usec): min=2373, max=83872, avg=14771.27, stdev=8121.45 00:09:26.019 clat percentiles (usec): 00:09:26.019 | 1.00th=[ 5604], 5.00th=[ 9634], 10.00th=[11207], 20.00th=[11731], 00:09:26.019 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:09:26.019 | 70.00th=[13698], 80.00th=[17695], 90.00th=[19792], 95.00th=[22414], 00:09:26.019 | 99.00th=[51119], 99.50th=[83362], 99.90th=[83362], 99.95th=[83362], 00:09:26.019 | 99.99th=[83362] 00:09:26.019 bw ( KiB/s): min=14912, max=16384, per=21.69%, avg=15648.00, stdev=1040.86, samples=2 00:09:26.019 iops : min= 3728, max= 4096, avg=3912.00, stdev=260.22, samples=2 00:09:26.019 lat (msec) : 4=0.37%, 10=3.96%, 20=75.43%, 50=19.44%, 100=0.80% 00:09:26.019 cpu : usr=3.19%, sys=5.69%, ctx=344, majf=0, minf=1 00:09:26.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:26.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.019 issued rwts: total=3584,4039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.019 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.019 job3: (groupid=0, jobs=1): err= 0: pid=3001137: Wed Dec 11 14:50:18 2024 00:09:26.019 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:09:26.019 slat (nsec): min=1613, max=5871.6k, avg=116373.11, stdev=591007.25 
00:09:26.019 clat (usec): min=6280, max=31290, avg=15099.83, stdev=5407.85 00:09:26.019 lat (usec): min=6288, max=31316, avg=15216.20, stdev=5451.52 00:09:26.019 clat percentiles (usec): 00:09:26.019 | 1.00th=[ 6521], 5.00th=[ 9765], 10.00th=[10945], 20.00th=[11469], 00:09:26.019 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[13173], 00:09:26.019 | 70.00th=[15008], 80.00th=[20579], 90.00th=[25035], 95.00th=[25560], 00:09:26.019 | 99.00th=[26870], 99.50th=[28443], 99.90th=[30278], 99.95th=[31327], 00:09:26.019 | 99.99th=[31327] 00:09:26.019 write: IOPS=3966, BW=15.5MiB/s (16.2MB/s)(15.5MiB/1003msec); 0 zone resets 00:09:26.019 slat (usec): min=2, max=44127, avg=132.92, stdev=1398.14 00:09:26.019 clat (usec): min=1974, max=153324, avg=15976.57, stdev=14929.69 00:09:26.019 lat (usec): min=1984, max=153335, avg=16109.49, stdev=15090.72 00:09:26.019 clat percentiles (msec): 00:09:26.019 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 12], 00:09:26.019 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:09:26.019 | 70.00th=[ 15], 80.00th=[ 20], 90.00th=[ 22], 95.00th=[ 42], 00:09:26.019 | 99.00th=[ 87], 99.50th=[ 128], 99.90th=[ 155], 99.95th=[ 155], 00:09:26.019 | 99.99th=[ 155] 00:09:26.019 bw ( KiB/s): min=12288, max=18520, per=21.36%, avg=15404.00, stdev=4406.69, samples=2 00:09:26.019 iops : min= 3072, max= 4630, avg=3851.00, stdev=1101.67, samples=2 00:09:26.019 lat (msec) : 2=0.03%, 4=0.65%, 10=8.81%, 20=70.15%, 50=19.52% 00:09:26.019 lat (msec) : 100=0.42%, 250=0.42% 00:09:26.019 cpu : usr=2.59%, sys=3.39%, ctx=505, majf=0, minf=1 00:09:26.019 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:26.019 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:26.019 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:26.019 issued rwts: total=3584,3978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:26.019 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:26.019 00:09:26.019 Run status group 0 (all jobs): 00:09:26.019 READ: bw=63.9MiB/s (67.0MB/s), 13.9MiB/s-22.1MiB/s (14.6MB/s-23.2MB/s), io=64.2MiB (67.3MB), run=1003-1004msec 00:09:26.019 WRITE: bw=70.4MiB/s (73.9MB/s), 15.3MiB/s-23.9MiB/s (16.1MB/s-25.1MB/s), io=70.7MiB (74.2MB), run=1003-1004msec 00:09:26.019 00:09:26.019 Disk stats (read/write): 00:09:26.019 nvme0n1: ios=3038/3072, merge=0/0, ticks=16055/11748, in_queue=27803, util=86.27% 00:09:26.019 nvme0n2: ios=4656/4860, merge=0/0, ticks=16064/15875, in_queue=31939, util=97.94% 00:09:26.019 nvme0n3: ios=3106/3461, merge=0/0, ticks=11772/10410, in_queue=22182, util=99.67% 00:09:26.019 nvme0n4: ios=2595/2863, merge=0/0, ticks=15093/16020, in_queue=31113, util=98.45% 00:09:26.019 14:50:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:26.019 [global] 00:09:26.019 thread=1 00:09:26.019 invalidate=1 00:09:26.019 rw=randwrite 00:09:26.019 time_based=1 00:09:26.019 runtime=1 00:09:26.019 ioengine=libaio 00:09:26.019 direct=1 00:09:26.019 bs=4096 00:09:26.019 iodepth=128 00:09:26.019 norandommap=0 00:09:26.019 numjobs=1 00:09:26.019 00:09:26.019 verify_dump=1 00:09:26.019 verify_backlog=512 00:09:26.019 verify_state_save=0 00:09:26.019 do_verify=1 00:09:26.019 verify=crc32c-intel 00:09:26.019 [job0] 00:09:26.019 filename=/dev/nvme0n1 00:09:26.019 [job1] 00:09:26.019 filename=/dev/nvme0n2 00:09:26.019 [job2] 00:09:26.019 
filename=/dev/nvme0n3 00:09:26.019 [job3] 00:09:26.019 filename=/dev/nvme0n4 00:09:26.276 Could not set queue depth (nvme0n1) 00:09:26.276 Could not set queue depth (nvme0n2) 00:09:26.276 Could not set queue depth (nvme0n3) 00:09:26.276 Could not set queue depth (nvme0n4) 00:09:26.534 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.534 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.534 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.534 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:26.534 fio-3.35 00:09:26.534 Starting 4 threads 00:09:27.912 00:09:27.912 job0: (groupid=0, jobs=1): err= 0: pid=3001515: Wed Dec 11 14:50:20 2024 00:09:27.912 read: IOPS=5912, BW=23.1MiB/s (24.2MB/s)(23.2MiB/1004msec) 00:09:27.912 slat (nsec): min=1053, max=20302k, avg=77377.33, stdev=632327.78 00:09:27.912 clat (usec): min=1306, max=48488, avg=11275.04, stdev=5490.94 00:09:27.912 lat (usec): min=3129, max=48491, avg=11352.42, stdev=5531.77 00:09:27.912 clat percentiles (usec): 00:09:27.912 | 1.00th=[ 5342], 5.00th=[ 6652], 10.00th=[ 7570], 20.00th=[ 8029], 00:09:27.912 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10159], 00:09:27.912 | 70.00th=[11076], 80.00th=[13304], 90.00th=[17171], 95.00th=[22152], 00:09:27.912 | 99.00th=[38011], 99.50th=[39060], 99.90th=[39060], 99.95th=[45351], 00:09:27.912 | 99.99th=[48497] 00:09:27.912 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:09:27.912 slat (usec): min=2, max=12892, avg=58.64, stdev=487.59 00:09:27.912 clat (usec): min=1195, max=40035, avg=9830.37, stdev=5387.91 00:09:27.912 lat (usec): min=1205, max=40038, avg=9889.01, stdev=5409.09 00:09:27.912 clat percentiles (usec): 00:09:27.912 | 1.00th=[ 2704], 5.00th=[ 4080], 10.00th=[ 4817], 20.00th=[ 6587], 00:09:27.912 | 30.00th=[ 7373], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 9241], 00:09:27.912 | 70.00th=[10028], 80.00th=[11731], 90.00th=[15401], 95.00th=[22414], 00:09:27.912 | 99.00th=[30802], 99.50th=[32113], 99.90th=[39060], 99.95th=[40109], 00:09:27.912 | 99.99th=[40109] 00:09:27.912 bw ( KiB/s): min=20480, max=28672, per=36.27%, avg=24576.00, stdev=5792.62, samples=2 00:09:27.912 iops : min= 5120, max= 7168, avg=6144.00, stdev=1448.15, samples=2 00:09:27.912 lat (msec) : 2=0.20%, 4=2.38%, 10=59.96%, 20=31.20%, 50=6.26% 00:09:27.912 cpu : usr=3.89%, sys=6.68%, ctx=414, majf=0, minf=1 00:09:27.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:27.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.912 issued rwts: total=5936,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.912 job1: (groupid=0, jobs=1): err= 0: pid=3001516: Wed Dec 11 14:50:20 2024 00:09:27.912 read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec) 00:09:27.912 slat (nsec): min=1663, max=26632k, avg=149864.90, stdev=1274230.72 00:09:27.912 clat (usec): min=3546, max=69226, avg=20866.54, stdev=9803.99 00:09:27.912 lat (usec): min=3549, max=69264, avg=21016.41, stdev=9904.27 00:09:27.912 clat percentiles (usec): 00:09:27.912 | 1.00th=[ 6849], 5.00th=[ 9241], 10.00th=[11207], 20.00th=[14222], 
00:09:27.912 | 30.00th=[15270], 40.00th=[15795], 50.00th=[17957], 60.00th=[19792], 00:09:27.912 | 70.00th=[23725], 80.00th=[27132], 90.00th=[33817], 95.00th=[42730], 00:09:27.912 | 99.00th=[54789], 99.50th=[54789], 99.90th=[54789], 99.95th=[57410], 00:09:27.912 | 99.99th=[69731] 00:09:27.912 write: IOPS=3206, BW=12.5MiB/s (13.1MB/s)(12.6MiB/1007msec); 0 zone resets 00:09:27.912 slat (usec): min=2, max=23858, avg=130.31, stdev=1002.37 00:09:27.912 clat (usec): min=1970, max=61391, avg=19521.85, stdev=9543.33 00:09:27.912 lat (usec): min=1975, max=61422, avg=19652.16, stdev=9627.45 00:09:27.912 clat percentiles (usec): 00:09:27.912 | 1.00th=[ 4080], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[12256], 00:09:27.912 | 30.00th=[13173], 40.00th=[14091], 50.00th=[19006], 60.00th=[21365], 00:09:27.912 | 70.00th=[22152], 80.00th=[23987], 90.00th=[29492], 95.00th=[37487], 00:09:27.912 | 99.00th=[53216], 99.50th=[54789], 99.90th=[55313], 99.95th=[58459], 00:09:27.912 | 99.99th=[61604] 00:09:27.912 bw ( KiB/s): min=12288, max=12528, per=18.31%, avg=12408.00, stdev=169.71, samples=2 00:09:27.912 iops : min= 3072, max= 3132, avg=3102.00, stdev=42.43, samples=2 00:09:27.912 lat (msec) : 2=0.11%, 4=0.44%, 10=8.22%, 20=49.71%, 50=39.71% 00:09:27.912 lat (msec) : 100=1.81% 00:09:27.912 cpu : usr=3.28%, sys=3.68%, ctx=268, majf=0, minf=1 00:09:27.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:27.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.912 issued rwts: total=3072,3229,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.912 job2: (groupid=0, jobs=1): err= 0: pid=3001517: Wed Dec 11 14:50:20 2024 00:09:27.912 read: IOPS=4482, BW=17.5MiB/s (18.4MB/s)(17.6MiB/1004msec) 00:09:27.912 slat (nsec): min=1214, max=16277k, avg=105947.05, stdev=925312.49 00:09:27.912 clat (usec): min=1414, max=42743, avg=14687.15, stdev=5674.73 00:09:27.912 lat (usec): min=1430, max=42758, avg=14793.10, stdev=5756.76 00:09:27.912 clat percentiles (usec): 00:09:27.912 | 1.00th=[ 3687], 5.00th=[ 5276], 10.00th=[ 8979], 20.00th=[10945], 00:09:27.912 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13698], 60.00th=[15139], 00:09:27.912 | 70.00th=[15926], 80.00th=[19006], 90.00th=[21890], 95.00th=[26608], 00:09:27.912 | 99.00th=[32375], 99.50th=[32637], 99.90th=[42206], 99.95th=[42730], 00:09:27.912 | 99.99th=[42730] 00:09:27.912 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:09:27.912 slat (nsec): min=1903, max=20830k, avg=83222.40, stdev=706820.29 00:09:27.912 clat (usec): min=326, max=42879, avg=13316.29, stdev=6748.77 00:09:27.912 lat (usec): min=447, max=42907, avg=13399.51, stdev=6820.70 00:09:27.912 clat percentiles (usec): 00:09:27.912 | 1.00th=[ 832], 5.00th=[ 3261], 10.00th=[ 4883], 20.00th=[ 8160], 00:09:27.912 | 30.00th=[10028], 40.00th=[10945], 50.00th=[11731], 60.00th=[13173], 00:09:27.912 | 70.00th=[15664], 80.00th=[20841], 90.00th=[22938], 95.00th=[25560], 00:09:27.912 | 99.00th=[29492], 99.50th=[30540], 99.90th=[31851], 99.95th=[38536], 00:09:27.912 | 99.99th=[42730] 00:09:27.912 bw ( KiB/s): min=16384, max=20480, per=27.20%, avg=18432.00, stdev=2896.31, samples=2 00:09:27.912 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:27.912 lat (usec) : 500=0.05%, 750=0.05%, 1000=0.96% 00:09:27.912 lat (msec) : 2=0.86%, 4=2.29%, 10=18.29%, 20=60.21%, 50=17.28% 
00:09:27.912 cpu : usr=3.19%, sys=5.28%, ctx=357, majf=0, minf=1 00:09:27.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:27.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.912 issued rwts: total=4500,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.912 job3: (groupid=0, jobs=1): err= 0: pid=3001518: Wed Dec 11 14:50:20 2024 00:09:27.912 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:09:27.912 slat (nsec): min=1325, max=22193k, avg=148454.28, stdev=1272476.66 00:09:27.912 clat (usec): min=2562, max=64892, avg=19486.21, stdev=10893.60 00:09:27.912 lat (usec): min=2573, max=64915, avg=19634.66, stdev=11017.28 00:09:27.912 clat percentiles (usec): 00:09:27.912 | 1.00th=[ 3752], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[10945], 00:09:27.912 | 30.00th=[11731], 40.00th=[13173], 50.00th=[16450], 60.00th=[18220], 00:09:27.912 | 70.00th=[22152], 80.00th=[27395], 90.00th=[39584], 95.00th=[42730], 00:09:27.912 | 99.00th=[44303], 99.50th=[44303], 99.90th=[56361], 99.95th=[64750], 00:09:27.912 | 99.99th=[64750] 00:09:27.912 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1003msec); 0 zone resets 00:09:27.912 slat (nsec): min=1998, max=23323k, avg=150942.05, stdev=1250878.67 00:09:27.912 clat (usec): min=496, max=87464, avg=20516.03, stdev=17088.92 00:09:27.912 lat (usec): min=522, max=87468, avg=20666.97, stdev=17190.50 00:09:27.912 clat percentiles (usec): 00:09:27.912 | 1.00th=[ 865], 5.00th=[ 2278], 10.00th=[ 4080], 20.00th=[ 9110], 00:09:27.912 | 30.00th=[11469], 40.00th=[11994], 50.00th=[13304], 60.00th=[14877], 00:09:27.912 | 70.00th=[20841], 80.00th=[35914], 90.00th=[48497], 95.00th=[53740], 00:09:27.912 | 99.00th=[76022], 99.50th=[76022], 99.90th=[87557], 99.95th=[87557], 00:09:27.912 | 99.99th=[87557] 00:09:27.912 bw ( KiB/s): min=11920, max=12656, per=18.13%, avg=12288.00, stdev=520.43, samples=2 00:09:27.912 iops : min= 2980, max= 3164, avg=3072.00, stdev=130.11, samples=2 00:09:27.912 lat (usec) : 500=0.02%, 750=0.08%, 1000=1.95% 00:09:27.912 lat (msec) : 2=0.44%, 4=3.06%, 10=11.37%, 20=50.36%, 50=28.34% 00:09:27.912 lat (msec) : 100=4.39% 00:09:27.912 cpu : usr=2.79%, sys=4.09%, ctx=289, majf=0, minf=1 00:09:27.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:27.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.912 issued rwts: total=3072,3078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.912 00:09:27.912 Run status group 0 (all jobs): 00:09:27.912 READ: bw=64.3MiB/s (67.4MB/s), 11.9MiB/s-23.1MiB/s (12.5MB/s-24.2MB/s), io=64.8MiB (67.9MB), run=1003-1007msec 00:09:27.912 WRITE: bw=66.2MiB/s (69.4MB/s), 12.0MiB/s-23.9MiB/s (12.6MB/s-25.1MB/s), io=66.6MiB (69.9MB), run=1003-1007msec 00:09:27.912 00:09:27.912 Disk stats (read/write): 00:09:27.913 nvme0n1: ios=5104/5120, merge=0/0, ticks=46236/40908, in_queue=87144, util=97.69% 00:09:27.913 nvme0n2: ios=2281/2560, merge=0/0, ticks=39056/39228, in_queue=78284, util=84.47% 00:09:27.913 nvme0n3: ios=3584/3719, merge=0/0, ticks=51018/47802, in_queue=98820, util=87.65% 00:09:27.913 nvme0n4: ios=2066/2469, merge=0/0, ticks=34455/28333, in_queue=62788, util=99.01% 00:09:27.913 14:50:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:27.913 14:50:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3001750 00:09:27.913 14:50:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:27.913 14:50:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:27.913 [global] 00:09:27.913 thread=1 00:09:27.913 invalidate=1 00:09:27.913 rw=read 00:09:27.913 time_based=1 00:09:27.913 runtime=10 00:09:27.913 ioengine=libaio 00:09:27.913 direct=1 00:09:27.913 bs=4096 00:09:27.913 iodepth=1 00:09:27.913 norandommap=1 00:09:27.913 numjobs=1 00:09:27.913 00:09:27.913 [job0] 00:09:27.913 filename=/dev/nvme0n1 00:09:27.913 [job1] 00:09:27.913 filename=/dev/nvme0n2 00:09:27.913 [job2] 00:09:27.913 filename=/dev/nvme0n3 00:09:27.913 [job3] 00:09:27.913 filename=/dev/nvme0n4 00:09:27.913 Could not set queue depth (nvme0n1) 00:09:27.913 Could not set queue depth (nvme0n2) 00:09:27.913 Could not set queue depth (nvme0n3) 00:09:27.913 Could not set queue depth (nvme0n4) 00:09:27.913 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.913 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.913 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.913 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:27.913 fio-3.35 00:09:27.913 Starting 4 threads 00:09:31.202 14:50:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:31.202 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=34783232, buflen=4096 00:09:31.202 fio: pid=3001890, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:31.202 14:50:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:31.202 14:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:31.202 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=31293440, buflen=4096 00:09:31.202 fio: pid=3001889, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:31.202 14:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:31.202 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=3936256, buflen=4096 00:09:31.202 fio: pid=3001886, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:31.202 14:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:31.202 14:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:31.461 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=344064, buflen=4096 00:09:31.461 fio: pid=3001887, err=95/file:io_u.c:1889, 
func=io_u error, error=Operation not supported 00:09:31.461 14:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:31.461 14:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:31.461 00:09:31.461 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3001886: Wed Dec 11 14:50:24 2024 00:09:31.461 read: IOPS=307, BW=1229KiB/s (1258kB/s)(3844KiB/3129msec) 00:09:31.461 slat (usec): min=6, max=18914, avg=41.17, stdev=688.95 00:09:31.461 clat (usec): min=152, max=52035, avg=3190.35, stdev=10648.90 00:09:31.461 lat (usec): min=160, max=52058, avg=3231.54, stdev=10675.03 00:09:31.461 clat percentiles (usec): 00:09:31.461 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:09:31.461 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 210], 00:09:31.461 | 70.00th=[ 217], 80.00th=[ 237], 90.00th=[ 355], 95.00th=[41157], 00:09:31.461 | 99.00th=[41681], 99.50th=[42206], 99.90th=[52167], 99.95th=[52167], 00:09:31.461 | 99.99th=[52167] 00:09:31.461 bw ( KiB/s): min= 128, max= 5562, per=5.32%, avg=1096.33, stdev=2188.08, samples=6 00:09:31.461 iops : min= 32, max= 1390, avg=274.00, stdev=546.81, samples=6 00:09:31.461 lat (usec) : 250=82.33%, 500=9.88%, 750=0.31% 00:09:31.461 lat (msec) : 2=0.10%, 20=0.10%, 50=7.07%, 100=0.10% 00:09:31.461 cpu : usr=0.10%, sys=0.32%, ctx=968, majf=0, minf=1 00:09:31.461 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.461 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.461 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.461 issued rwts: total=962,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.461 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.461 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3001887: Wed Dec 11 14:50:24 2024 00:09:31.461 read: IOPS=25, BW=101KiB/s (103kB/s)(336KiB/3332msec) 00:09:31.461 slat (usec): min=9, max=12771, avg=317.73, stdev=1674.88 00:09:31.461 clat (usec): min=249, max=44083, avg=39219.09, stdev=8764.13 00:09:31.461 lat (usec): min=263, max=53960, avg=39540.28, stdev=8991.23 00:09:31.461 clat percentiles (usec): 00:09:31.461 | 1.00th=[ 249], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:31.461 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:31.461 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:09:31.461 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:09:31.462 | 99.99th=[44303] 00:09:31.462 bw ( KiB/s): min= 96, max= 112, per=0.48%, avg=100.33, stdev= 6.98, samples=6 00:09:31.462 iops : min= 24, max= 28, avg=25.00, stdev= 1.67, samples=6 00:09:31.462 lat (usec) : 250=1.18%, 500=3.53% 00:09:31.462 lat (msec) : 50=94.12% 00:09:31.462 cpu : usr=0.00%, sys=0.12%, ctx=88, majf=0, minf=2 00:09:31.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.462 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.462 issued rwts: total=85,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.462 
job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3001889: Wed Dec 11 14:50:24 2024 00:09:31.462 read: IOPS=2607, BW=10.2MiB/s (10.7MB/s)(29.8MiB/2930msec) 00:09:31.462 slat (usec): min=6, max=9761, avg= 8.61, stdev=111.59 00:09:31.462 clat (usec): min=161, max=42391, avg=370.92, stdev=2561.74 00:09:31.462 lat (usec): min=168, max=51958, avg=379.53, stdev=2585.43 00:09:31.462 clat percentiles (usec): 00:09:31.462 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:09:31.462 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:09:31.462 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 239], 95.00th=[ 253], 00:09:31.462 | 99.00th=[ 269], 99.50th=[ 347], 99.90th=[41157], 99.95th=[42206], 00:09:31.462 | 99.99th=[42206] 00:09:31.462 bw ( KiB/s): min= 152, max=19016, per=59.04%, avg=12174.40, stdev=7899.72, samples=5 00:09:31.462 iops : min= 38, max= 4754, avg=3043.60, stdev=1974.93, samples=5 00:09:31.462 lat (usec) : 250=94.05%, 500=5.52%, 750=0.01% 00:09:31.462 lat (msec) : 4=0.01%, 50=0.39% 00:09:31.462 cpu : usr=0.82%, sys=2.22%, ctx=7642, majf=0, minf=2 00:09:31.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.462 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.462 issued rwts: total=7641,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.462 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3001890: Wed Dec 11 14:50:24 2024 00:09:31.462 read: IOPS=3120, BW=12.2MiB/s (12.8MB/s)(33.2MiB/2722msec) 00:09:31.462 slat (nsec): min=6348, max=33670, avg=7453.90, stdev=1237.70 00:09:31.462 clat (usec): min=155, max=41201, avg=308.92, stdev=1984.12 00:09:31.462 lat (usec): min=162, max=41224, avg=316.37, stdev=1984.82 00:09:31.462 clat percentiles (usec): 00:09:31.462 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:09:31.462 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:09:31.462 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 233], 95.00th=[ 243], 00:09:31.462 | 99.00th=[ 302], 99.50th=[ 400], 99.90th=[41157], 99.95th=[41157], 00:09:31.462 | 99.99th=[41157] 00:09:31.462 bw ( KiB/s): min= 104, max=18408, per=58.47%, avg=12056.00, stdev=8640.32, samples=5 00:09:31.462 iops : min= 26, max= 4602, avg=3014.00, stdev=2160.08, samples=5 00:09:31.462 lat (usec) : 250=96.69%, 500=2.98%, 750=0.04% 00:09:31.462 lat (msec) : 2=0.02%, 10=0.01%, 20=0.01%, 50=0.24% 00:09:31.462 cpu : usr=0.81%, sys=2.87%, ctx=8493, majf=0, minf=2 00:09:31.462 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:31.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.462 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:31.462 issued rwts: total=8493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:31.462 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:31.462 00:09:31.462 Run status group 0 (all jobs): 00:09:31.462 READ: bw=20.1MiB/s (21.1MB/s), 101KiB/s-12.2MiB/s (103kB/s-12.8MB/s), io=67.1MiB (70.4MB), run=2722-3332msec 00:09:31.462 00:09:31.462 Disk stats (read/write): 00:09:31.462 nvme0n1: ios=999/0, merge=0/0, ticks=4005/0, in_queue=4005, util=98.37% 00:09:31.462 nvme0n2: ios=78/0, merge=0/0, ticks=3049/0, in_queue=3049, util=95.54% 
00:09:31.462 nvme0n3: ios=7635/0, merge=0/0, ticks=2716/0, in_queue=2716, util=96.18% 00:09:31.462 nvme0n4: ios=8032/0, merge=0/0, ticks=2475/0, in_queue=2475, util=96.44% 00:09:31.721 14:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:31.721 14:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:31.980 14:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:31.980 14:50:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:32.239 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:32.239 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:32.239 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:32.239 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:32.497 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:32.497 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3001750 00:09:32.497 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:32.497 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:32.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.756 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:32.756 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:32.756 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:32.756 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.756 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:32.756 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:32.756 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:32.756 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:32.756 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:32.756 nvmf hotplug test: fio failed as expected 00:09:32.756 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:32.756 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:09:32.756 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:32.756 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:32.756 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:33.016 rmmod nvme_tcp 00:09:33.016 rmmod nvme_fabrics 00:09:33.016 rmmod nvme_keyring 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2998815 ']' 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2998815 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2998815 ']' 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2998815 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2998815 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2998815' 00:09:33.016 killing process with pid 2998815 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2998815 00:09:33.016 14:50:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2998815 00:09:33.276 14:50:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:33.276 14:50:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:33.276 14:50:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:33.276 14:50:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:33.276 14:50:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 
00:09:33.276 14:50:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:33.276 14:50:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:33.276 14:50:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:33.276 14:50:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:33.276 14:50:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.276 14:50:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.276 14:50:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.182 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:35.182 00:09:35.182 real 0m26.980s 00:09:35.182 user 1m46.198s 00:09:35.182 sys 0m8.758s 00:09:35.182 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.182 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.182 ************************************ 00:09:35.182 END TEST nvmf_fio_target 00:09:35.182 ************************************ 00:09:35.182 14:50:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:35.182 14:50:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:35.182 14:50:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.182 14:50:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:35.442 ************************************ 00:09:35.442 START TEST nvmf_bdevio 00:09:35.442 ************************************ 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:35.442 * Looking for test storage... 
00:09:35.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:35.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.442 --rc genhtml_branch_coverage=1 00:09:35.442 --rc genhtml_function_coverage=1 00:09:35.442 --rc genhtml_legend=1 00:09:35.442 --rc geninfo_all_blocks=1 00:09:35.442 --rc geninfo_unexecuted_blocks=1 00:09:35.442 00:09:35.442 ' 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:35.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.442 --rc genhtml_branch_coverage=1 00:09:35.442 --rc genhtml_function_coverage=1 00:09:35.442 --rc genhtml_legend=1 00:09:35.442 --rc geninfo_all_blocks=1 00:09:35.442 --rc geninfo_unexecuted_blocks=1 00:09:35.442 00:09:35.442 ' 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:35.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.442 --rc genhtml_branch_coverage=1 00:09:35.442 --rc genhtml_function_coverage=1 00:09:35.442 --rc genhtml_legend=1 00:09:35.442 --rc geninfo_all_blocks=1 00:09:35.442 --rc geninfo_unexecuted_blocks=1 00:09:35.442 00:09:35.442 ' 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:35.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:35.442 --rc genhtml_branch_coverage=1 00:09:35.442 --rc genhtml_function_coverage=1 00:09:35.442 --rc genhtml_legend=1 00:09:35.442 --rc geninfo_all_blocks=1 00:09:35.442 --rc geninfo_unexecuted_blocks=1 00:09:35.442 00:09:35.442 ' 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.442 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:35.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@14 -- # nvmftestinit 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:35.443 14:50:28 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:42.020 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:42.020 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:42.020 14:50:34 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:42.020 Found net devices under 0000:86:00.0: cvl_0_0 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:42.020 Found net devices under 0000:86:00.1: cvl_0_1 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:42.020 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:42.021 
14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:42.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:42.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:09:42.021 00:09:42.021 --- 10.0.0.2 ping statistics --- 00:09:42.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.021 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:42.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:42.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:09:42.021 00:09:42.021 --- 10.0.0.1 ping statistics --- 00:09:42.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.021 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3006290 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3006290 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3006290 ']' 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.021 [2024-12-11 14:50:34.477610] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
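The nvmftestinit trace above reduces to a small amount of iproute2/iptables plumbing: one port of the dual-port E810 NIC (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target at 10.0.0.2, while its peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A condensed sketch of those steps, using the device names, addresses and port taken from the log; this is an illustration of the sequence, not the literal nvmf/common.sh code (the iptables comment tag in particular is simplified):

# Run as root; assumes the two ice ports enumerate as cvl_0_0 / cvl_0_1
# and can reach each other on the 10.0.0.0/24 subnet.
TARGET_IF=cvl_0_0           # moved into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1        # stays in the root namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP traffic on port 4420 and tag the rule so teardown can
# strip it again later.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment SPDK_NVMF

# Connectivity checks mirroring the pings in the trace.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Kernel NVMe/TCP initiator module, loaded here as part of nvmftestinit.
modprobe nvme-tcp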
00:09:42.021 [2024-12-11 14:50:34.477662] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:42.021 [2024-12-11 14:50:34.559126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:42.021 [2024-12-11 14:50:34.599437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:42.021 [2024-12-11 14:50:34.599478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:42.021 [2024-12-11 14:50:34.599486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:42.021 [2024-12-11 14:50:34.599491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:42.021 [2024-12-11 14:50:34.599496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:42.021 [2024-12-11 14:50:34.601056] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:42.021 [2024-12-11 14:50:34.601184] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:42.021 [2024-12-11 14:50:34.601274] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:42.021 [2024-12-11 14:50:34.601274] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.021 [2024-12-11 14:50:34.750524] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.021 Malloc0 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.021 14:50:34 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:42.021 [2024-12-11 14:50:34.821203] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.021 { 00:09:42.021 "params": { 00:09:42.021 "name": "Nvme$subsystem", 00:09:42.021 "trtype": "$TEST_TRANSPORT", 00:09:42.021 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.021 "adrfam": "ipv4", 00:09:42.021 "trsvcid": "$NVMF_PORT", 00:09:42.021 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.021 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.021 "hdgst": ${hdgst:-false}, 00:09:42.021 "ddgst": ${ddgst:-false} 00:09:42.021 }, 00:09:42.021 "method": "bdev_nvme_attach_controller" 00:09:42.021 } 00:09:42.021 EOF 00:09:42.021 )") 00:09:42.021 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:42.022 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:09:42.022 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:42.022 14:50:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.022 "params": { 00:09:42.022 "name": "Nvme1", 00:09:42.022 "trtype": "tcp", 00:09:42.022 "traddr": "10.0.0.2", 00:09:42.022 "adrfam": "ipv4", 00:09:42.022 "trsvcid": "4420", 00:09:42.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.022 "hdgst": false, 00:09:42.022 "ddgst": false 00:09:42.022 }, 00:09:42.022 "method": "bdev_nvme_attach_controller" 00:09:42.022 }' 00:09:42.022 [2024-12-11 14:50:34.874979] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:09:42.022 [2024-12-11 14:50:34.875022] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3006376 ] 00:09:42.022 [2024-12-11 14:50:34.952950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:42.022 [2024-12-11 14:50:34.995794] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.022 [2024-12-11 14:50:34.995901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.022 [2024-12-11 14:50:34.995901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:42.280 I/O targets: 00:09:42.280 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:42.280 00:09:42.280 00:09:42.280 CUnit - A unit testing framework for C - Version 2.1-3 00:09:42.280 http://cunit.sourceforge.net/ 00:09:42.280 00:09:42.280 00:09:42.280 Suite: bdevio tests on: Nvme1n1 00:09:42.280 Test: blockdev write read block ...passed 00:09:42.538 Test: blockdev write zeroes read block ...passed 00:09:42.538 Test: blockdev write zeroes read no split ...passed 00:09:42.538 Test: blockdev write zeroes read split ...passed 00:09:42.538 Test: blockdev write zeroes read split partial ...passed 00:09:42.538 Test: blockdev reset ...[2024-12-11 14:50:35.426986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:42.538 [2024-12-11 14:50:35.427050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1415050 (9): Bad file descriptor 00:09:42.538 [2024-12-11 14:50:35.439146] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:42.538 passed 00:09:42.538 Test: blockdev write read 8 blocks ...passed 00:09:42.538 Test: blockdev write read size > 128k ...passed 00:09:42.538 Test: blockdev write read invalid size ...passed 00:09:42.538 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:42.538 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:42.538 Test: blockdev write read max offset ...passed 00:09:42.797 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:42.797 Test: blockdev writev readv 8 blocks ...passed 00:09:42.797 Test: blockdev writev readv 30 x 1block ...passed 00:09:42.797 Test: blockdev writev readv block ...passed 00:09:42.797 Test: blockdev writev readv size > 128k ...passed 00:09:42.797 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:42.797 Test: blockdev comparev and writev ...[2024-12-11 14:50:35.694537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.797 [2024-12-11 14:50:35.694565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:42.797 [2024-12-11 14:50:35.694579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.797 [2024-12-11 14:50:35.694587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:42.797 [2024-12-11 14:50:35.694834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.797 [2024-12-11 14:50:35.694845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:42.797 [2024-12-11 14:50:35.694861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.797 [2024-12-11 14:50:35.694868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:42.797 [2024-12-11 14:50:35.695107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.797 [2024-12-11 14:50:35.695117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:42.797 [2024-12-11 14:50:35.695128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.797 [2024-12-11 14:50:35.695136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:42.797 [2024-12-11 14:50:35.695394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.797 [2024-12-11 14:50:35.695405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:42.797 [2024-12-11 14:50:35.695417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:42.797 [2024-12-11 14:50:35.695425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:42.797 passed 00:09:42.797 Test: blockdev nvme passthru rw ...passed 00:09:42.797 Test: blockdev nvme passthru vendor specific ...[2024-12-11 14:50:35.777508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:42.797 [2024-12-11 14:50:35.777528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:42.798 [2024-12-11 14:50:35.777633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:42.798 [2024-12-11 14:50:35.777644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:42.798 [2024-12-11 14:50:35.777762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:42.798 [2024-12-11 14:50:35.777772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:42.798 [2024-12-11 14:50:35.777887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:42.798 [2024-12-11 14:50:35.777897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:42.798 passed 00:09:42.798 Test: blockdev nvme admin passthru ...passed 00:09:42.798 Test: blockdev copy ...passed 00:09:42.798 00:09:42.798 Run Summary: Type Total Ran Passed Failed Inactive 00:09:42.798 suites 1 1 n/a 0 0 00:09:42.798 tests 23 23 23 0 0 00:09:42.798 asserts 152 152 152 0 n/a 00:09:42.798 00:09:42.798 Elapsed time = 1.121 seconds 00:09:43.057 14:50:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:43.057 14:50:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.057 14:50:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:43.057 14:50:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.057 14:50:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:43.057 14:50:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:43.057 14:50:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:43.057 14:50:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:43.057 14:50:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:43.057 14:50:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:43.057 14:50:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:43.057 14:50:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:43.057 rmmod nvme_tcp 00:09:43.057 rmmod nvme_fabrics 00:09:43.057 rmmod nvme_keyring 00:09:43.057 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:43.057 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:43.057 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
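Stripped of the xtrace noise, the bdevio run that just completed is a short RPC-driven sequence: start nvmf_tgt inside the target namespace, create the TCP transport, back it with a 64 MiB malloc bdev, export that as a namespace of nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, then point the bdevio CUnit harness at it through a generated bdev_nvme_attach_controller config. A condensed sketch under the workspace layout shown in the log (rpc_cmd in the harness effectively invokes scripts/rpc.py; the JSON wrapper below is a minimal hand-written equivalent of what gen_nvmf_target_json emits, since the helper's full output is not shown in this trace):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
NS_EXEC=(ip netns exec cvl_0_0_ns_spdk)

# 1. Target application inside the namespace (-m 0x78 = cores 3-6, as above).
"${NS_EXEC[@]}" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x78 &
# (the harness waits for the RPC socket with waitforlisten before continuing)

# 2. Transport, backing bdev, subsystem, namespace and listener -- the same
#    RPCs that appear in the trace above.
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
  -a -s SPDK00000000000001
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
  -t tcp -a 10.0.0.2 -s 4420

# 3. bdevio attaches an NVMe-oF controller over TCP and runs its block
#    read/write/compare suite against the exported namespace.
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
"$SPDK/test/bdev/bdevio/bdevio" --json /tmp/bdevio_nvme.json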
00:09:43.057 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3006290 ']' 00:09:43.057 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3006290 00:09:43.057 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3006290 ']' 00:09:43.057 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3006290 00:09:43.057 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:43.057 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.057 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3006290 00:09:43.057 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:43.057 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:43.057 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3006290' 00:09:43.057 killing process with pid 3006290 00:09:43.057 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3006290 00:09:43.057 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3006290 00:09:43.316 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:43.316 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:43.316 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:43.316 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:43.316 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:43.316 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:43.316 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:43.316 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:43.316 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:43.316 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.316 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.316 14:50:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.852 14:50:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:45.852 00:09:45.852 real 0m10.106s 00:09:45.852 user 0m10.728s 00:09:45.852 sys 0m4.989s 00:09:45.852 14:50:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.852 14:50:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:45.852 ************************************ 00:09:45.852 END TEST nvmf_bdevio 00:09:45.852 ************************************ 00:09:45.852 14:50:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:45.852 00:09:45.852 real 4m35.084s 00:09:45.852 user 10m24.436s 00:09:45.852 sys 1m39.368s 
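The teardown traced above is the mirror image of the setup: kill the nvmf_tgt reactor process, strip only the SPDK-tagged firewall rule, flush the test addresses and let the helper remove the namespace. Roughly, with _remove_spdk_ns replaced by a hypothetical inline equivalent since its body is not shown in this part of the trace:

nvmfpid=3006290                   # nvmf_tgt pid recorded by the harness in this run
kill "$nvmfpid"                   # (the harness then waits for it to exit)

# Remove only the rules tagged SPDK_NVMF, leaving the rest of the ruleset alone.
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip -4 addr flush cvl_0_1
# _remove_spdk_ns is the harness helper; a plain equivalent (an assumption,
# not the literal code) would be:
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true

# The nvme-tcp / nvme-fabrics modules were already unloaded by the
# modprobe -r / rmmod lines earlier in the trace.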
00:09:45.852 14:50:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.853 ************************************ 00:09:45.853 END TEST nvmf_target_core 00:09:45.853 ************************************ 00:09:45.853 14:50:38 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:45.853 14:50:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.853 14:50:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.853 14:50:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:45.853 ************************************ 00:09:45.853 START TEST nvmf_target_extra 00:09:45.853 ************************************ 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:45.853 * Looking for test storage... 00:09:45.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.853 --rc genhtml_branch_coverage=1 00:09:45.853 --rc genhtml_function_coverage=1 00:09:45.853 --rc genhtml_legend=1 00:09:45.853 --rc geninfo_all_blocks=1 00:09:45.853 --rc geninfo_unexecuted_blocks=1 00:09:45.853 00:09:45.853 ' 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.853 --rc genhtml_branch_coverage=1 00:09:45.853 --rc genhtml_function_coverage=1 00:09:45.853 --rc genhtml_legend=1 00:09:45.853 --rc geninfo_all_blocks=1 00:09:45.853 --rc geninfo_unexecuted_blocks=1 00:09:45.853 00:09:45.853 ' 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.853 --rc genhtml_branch_coverage=1 00:09:45.853 --rc genhtml_function_coverage=1 00:09:45.853 --rc genhtml_legend=1 00:09:45.853 --rc geninfo_all_blocks=1 00:09:45.853 --rc geninfo_unexecuted_blocks=1 00:09:45.853 00:09:45.853 ' 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.853 --rc genhtml_branch_coverage=1 00:09:45.853 --rc genhtml_function_coverage=1 00:09:45.853 --rc genhtml_legend=1 00:09:45.853 --rc geninfo_all_blocks=1 00:09:45.853 --rc geninfo_unexecuted_blocks=1 00:09:45.853 00:09:45.853 ' 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.853 14:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:45.854 ************************************ 00:09:45.854 START TEST nvmf_example 00:09:45.854 ************************************ 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:45.854 * Looking for test storage... 
00:09:45.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.854 --rc genhtml_branch_coverage=1 00:09:45.854 --rc genhtml_function_coverage=1 00:09:45.854 --rc genhtml_legend=1 00:09:45.854 --rc geninfo_all_blocks=1 00:09:45.854 --rc geninfo_unexecuted_blocks=1 00:09:45.854 00:09:45.854 ' 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.854 --rc genhtml_branch_coverage=1 00:09:45.854 --rc genhtml_function_coverage=1 00:09:45.854 --rc genhtml_legend=1 00:09:45.854 --rc geninfo_all_blocks=1 00:09:45.854 --rc geninfo_unexecuted_blocks=1 00:09:45.854 00:09:45.854 ' 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.854 --rc genhtml_branch_coverage=1 00:09:45.854 --rc genhtml_function_coverage=1 00:09:45.854 --rc genhtml_legend=1 00:09:45.854 --rc geninfo_all_blocks=1 00:09:45.854 --rc geninfo_unexecuted_blocks=1 00:09:45.854 00:09:45.854 ' 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.854 --rc genhtml_branch_coverage=1 00:09:45.854 --rc genhtml_function_coverage=1 00:09:45.854 --rc genhtml_legend=1 00:09:45.854 --rc geninfo_all_blocks=1 00:09:45.854 --rc geninfo_unexecuted_blocks=1 00:09:45.854 00:09:45.854 ' 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:45.854 14:50:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.854 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:46.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:46.114 
14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:46.114 14:50:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:52.686 14:50:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:52.686 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:52.686 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:52.686 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:52.687 Found net devices under 0000:86:00.0: cvl_0_0 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:52.687 Found net devices under 0000:86:00.1: cvl_0_1 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.687 14:50:44 
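The discovery loop above maps each Intel E810 PCI function to its kernel net device by globbing sysfs, which is how the harness arrives at cvl_0_0 and cvl_0_1. A standalone sketch of the same lookup, using the first address reported above (adjust the PCI address for another system):

    # Mirror the pci_net_devs glob from nvmf/common.sh for one PCI function.
    pci=0000:86:00.0
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$dev" ] && echo "${dev##*/}"   # prints the bound netdev name, e.g. cvl_0_0
    done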
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:52.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:52.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:09:52.687 00:09:52.687 --- 10.0.0.2 ping statistics --- 00:09:52.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.687 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:52.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:09:52.687 00:09:52.687 --- 10.0.0.1 ping statistics --- 00:09:52.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.687 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3010196 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3010196 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3010196 ']' 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.687 14:50:44 
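nvmf_tcp_init above carves the two E810 ports into a small point-to-point topology: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace and becomes the target side at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1/24, TCP port 4420 is opened with a tagged iptables rule, and both directions are ping-verified before nvme-tcp is loaded. A condensed sketch of that setup, using the interface names and addresses shown in the trace:

    # Target port lives in its own network namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Let NVMe/TCP traffic in; the SPDK_NVMF comment lets the later cleanup sweep the rule.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespaced target -> root ns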
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.687 14:50:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.946 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.946 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:52.946 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:52.946 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:52.946 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:09:52.947 14:50:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:05.151 Initializing NVMe Controllers 00:10:05.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:05.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:05.151 Initialization complete. Launching workers. 00:10:05.151 ======================================================== 00:10:05.151 Latency(us) 00:10:05.151 Device Information : IOPS MiB/s Average min max 00:10:05.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18049.83 70.51 3546.09 702.47 17061.56 00:10:05.151 ======================================================== 00:10:05.151 Total : 18049.83 70.51 3546.09 702.47 17061.56 00:10:05.151 00:10:05.151 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:05.151 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:05.151 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:05.151 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:05.151 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:05.151 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:05.151 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:05.151 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:05.151 rmmod nvme_tcp 00:10:05.151 rmmod nvme_fabrics 00:10:05.151 rmmod nvme_keyring 00:10:05.151 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:05.151 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:05.151 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 3010196 ']' 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 3010196 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3010196 ']' 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3010196 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3010196 00:10:05.152 14:50:56 
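Between nvmfexamplestart and the spdk_nvme_perf run above, the example target is configured entirely over its RPC socket: a TCP transport is created with the '-o -u 8192' options recorded in the trace, a 64 MB malloc bdev with 512-byte blocks becomes Malloc0, subsystem nqn.2016-06.io.spdk:cnode1 gets that bdev as a namespace, and a TCP listener is added on 10.0.0.2:4420. The rpc_cmd helper seen here is the harness wrapper around SPDK's RPC client; outside the harness the same sequence would look roughly like this (a sketch, assuming scripts/rpc.py talking to the default /var/tmp/spdk.sock):

    # Equivalent standalone RPC sequence for the configuration traced above.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512        # returns the bdev name, Malloc0 here
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The perf invocation above then attaches over that same address from the root namespace and drives the 64-deep, 4 KiB random workload summarized in the latency table (about 18 K IOPS in this run).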
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3010196' 00:10:05.152 killing process with pid 3010196 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3010196 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3010196 00:10:05.152 nvmf threads initialize successfully 00:10:05.152 bdev subsystem init successfully 00:10:05.152 created a nvmf target service 00:10:05.152 create targets's poll groups done 00:10:05.152 all subsystems of target started 00:10:05.152 nvmf target is running 00:10:05.152 all subsystems of target stopped 00:10:05.152 destroy targets's poll groups done 00:10:05.152 destroyed the nvmf target service 00:10:05.152 bdev subsystem finish successfully 00:10:05.152 nvmf threads destroy successfully 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.152 14:50:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.720 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:05.720 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:05.720 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:05.720 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:05.720 00:10:05.720 real 0m19.823s 00:10:05.720 user 0m45.976s 00:10:05.720 sys 0m6.124s 00:10:05.720 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.720 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:05.720 ************************************ 00:10:05.720 END TEST nvmf_example 00:10:05.720 ************************************ 00:10:05.720 14:50:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:05.720 14:50:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:05.720 14:50:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.720 14:50:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:05.720 ************************************ 00:10:05.720 START TEST nvmf_filesystem 00:10:05.720 ************************************ 00:10:05.720 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:05.720 * Looking for test storage... 00:10:05.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:05.720 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:05.720 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:05.720 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:05.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.983 --rc genhtml_branch_coverage=1 00:10:05.983 --rc genhtml_function_coverage=1 00:10:05.983 --rc genhtml_legend=1 00:10:05.983 --rc geninfo_all_blocks=1 00:10:05.983 --rc geninfo_unexecuted_blocks=1 00:10:05.983 00:10:05.983 ' 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:05.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.983 --rc genhtml_branch_coverage=1 00:10:05.983 --rc genhtml_function_coverage=1 00:10:05.983 --rc genhtml_legend=1 00:10:05.983 --rc geninfo_all_blocks=1 00:10:05.983 --rc geninfo_unexecuted_blocks=1 00:10:05.983 00:10:05.983 ' 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:05.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.983 --rc genhtml_branch_coverage=1 00:10:05.983 --rc genhtml_function_coverage=1 00:10:05.983 --rc genhtml_legend=1 00:10:05.983 --rc geninfo_all_blocks=1 00:10:05.983 --rc geninfo_unexecuted_blocks=1 00:10:05.983 00:10:05.983 ' 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:05.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.983 --rc genhtml_branch_coverage=1 00:10:05.983 --rc genhtml_function_coverage=1 00:10:05.983 --rc genhtml_legend=1 00:10:05.983 --rc geninfo_all_blocks=1 00:10:05.983 --rc geninfo_unexecuted_blocks=1 00:10:05.983 00:10:05.983 ' 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh 00:10:05.983 14:50:58 
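The scripts/common.sh trace that opens the filesystem test ("Looking for test storage" onward) is a dotted-version comparison: the lcov version reported by the tool (1.15 in this run) is split on '.', '-' and ':' and compared field by field against 2, and because 1.15 < 2 the older-style '--rc lcov_*' coverage options are selected. A self-contained sketch of that comparison idea; the helper name below is mine, not the one used by scripts/common.sh:

    # version_lt VER1 VER2: succeed when dotted version VER1 sorts before VER2.
    version_lt() {
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }

    if version_lt 1.15 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi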
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output ']' 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/build_config.sh ]] 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/build_config.sh 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk 00:10:05.983 
14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:05.983 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/applications.sh 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/applications.sh 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/config.h ]] 00:10:05.984 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:05.984 #define SPDK_CONFIG_H 00:10:05.984 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:05.984 #define SPDK_CONFIG_APPS 1 00:10:05.984 #define SPDK_CONFIG_ARCH native 00:10:05.984 #undef SPDK_CONFIG_ASAN 00:10:05.984 #undef SPDK_CONFIG_AVAHI 00:10:05.984 #undef SPDK_CONFIG_CET 00:10:05.984 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:05.984 #define SPDK_CONFIG_COVERAGE 1 00:10:05.984 #define SPDK_CONFIG_CROSS_PREFIX 00:10:05.984 #undef SPDK_CONFIG_CRYPTO 00:10:05.984 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:05.984 #undef SPDK_CONFIG_CUSTOMOCF 00:10:05.984 #undef SPDK_CONFIG_DAOS 00:10:05.984 #define SPDK_CONFIG_DAOS_DIR 00:10:05.984 #define SPDK_CONFIG_DEBUG 1 00:10:05.984 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:05.984 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build 00:10:05.984 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:05.984 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:05.984 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:05.984 #undef SPDK_CONFIG_DPDK_UADK 00:10:05.984 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk 00:10:05.984 #define SPDK_CONFIG_EXAMPLES 1 00:10:05.984 #undef SPDK_CONFIG_FC 00:10:05.984 #define SPDK_CONFIG_FC_PATH 00:10:05.984 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:05.984 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:05.984 #define SPDK_CONFIG_FSDEV 1 00:10:05.984 #undef SPDK_CONFIG_FUSE 00:10:05.984 #undef SPDK_CONFIG_FUZZER 00:10:05.984 #define SPDK_CONFIG_FUZZER_LIB 00:10:05.984 #undef SPDK_CONFIG_GOLANG 00:10:05.984 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:05.984 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:05.984 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:05.984 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:05.984 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:05.984 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:05.984 #undef SPDK_CONFIG_HAVE_LZ4 00:10:05.984 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:05.984 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:05.984 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:05.984 #define SPDK_CONFIG_IDXD 1 00:10:05.984 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:05.984 #undef SPDK_CONFIG_IPSEC_MB 00:10:05.984 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:05.984 #define SPDK_CONFIG_ISAL 1 00:10:05.984 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:05.984 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:05.984 #define SPDK_CONFIG_LIBDIR 00:10:05.984 #undef SPDK_CONFIG_LTO 00:10:05.984 #define SPDK_CONFIG_MAX_LCORES 128 00:10:05.984 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:05.984 #define SPDK_CONFIG_NVME_CUSE 1 00:10:05.984 #undef SPDK_CONFIG_OCF 00:10:05.984 #define SPDK_CONFIG_OCF_PATH 00:10:05.984 #define SPDK_CONFIG_OPENSSL_PATH 00:10:05.984 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:05.984 #define SPDK_CONFIG_PGO_DIR 00:10:05.984 #undef SPDK_CONFIG_PGO_USE 00:10:05.984 #define SPDK_CONFIG_PREFIX /usr/local 00:10:05.984 #undef SPDK_CONFIG_RAID5F 00:10:05.984 #undef SPDK_CONFIG_RBD 00:10:05.984 #define SPDK_CONFIG_RDMA 1 00:10:05.984 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:05.984 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:05.984 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:05.984 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:05.984 #define SPDK_CONFIG_SHARED 1 00:10:05.984 #undef SPDK_CONFIG_SMA 00:10:05.984 #define SPDK_CONFIG_TESTS 1 00:10:05.984 #undef SPDK_CONFIG_TSAN 
00:10:05.984 #define SPDK_CONFIG_UBLK 1 00:10:05.984 #define SPDK_CONFIG_UBSAN 1 00:10:05.984 #undef SPDK_CONFIG_UNIT_TESTS 00:10:05.984 #undef SPDK_CONFIG_URING 00:10:05.984 #define SPDK_CONFIG_URING_PATH 00:10:05.984 #undef SPDK_CONFIG_URING_ZNS 00:10:05.984 #undef SPDK_CONFIG_USDT 00:10:05.984 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:05.984 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:05.984 #define SPDK_CONFIG_VFIO_USER 1 00:10:05.984 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:05.984 #define SPDK_CONFIG_VHOST 1 00:10:05.984 #define SPDK_CONFIG_VIRTIO 1 00:10:05.984 #undef SPDK_CONFIG_VTUNE 00:10:05.984 #define SPDK_CONFIG_VTUNE_DIR 00:10:05.984 #define SPDK_CONFIG_WERROR 1 00:10:05.985 #define SPDK_CONFIG_WPDK_DIR 00:10:05.985 #undef SPDK_CONFIG_XNVME 00:10:05.985 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/common 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/common 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/../../../ 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/.run_test_name 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:05.985 14:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power ]] 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:05.985 14:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:05.985 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:05.986 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/ar-xnvme-fixer 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/ar-xnvme-fixer 00:10:05.987 14:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3012600 ]] 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3012600 00:10:05.987 14:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.8U7pT9 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target /tmp/spdk.8U7pT9/tests/target /tmp/spdk.8U7pT9 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=193989521408 00:10:05.987 14:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=201248804864 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7259283456 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=100614369280 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100624400384 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:05.987 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=40226734080 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=40249761792 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23027712 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=100624003072 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100624404480 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=401408 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=20124864512 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20124876800 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:05.988 14:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:05.988 * Looking for test storage... 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=193989521408 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9473875968 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:05.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:05.988 
14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:05.988 14:50:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:05.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.988 --rc genhtml_branch_coverage=1 00:10:05.988 --rc genhtml_function_coverage=1 00:10:05.988 --rc genhtml_legend=1 00:10:05.988 --rc geninfo_all_blocks=1 00:10:05.988 --rc geninfo_unexecuted_blocks=1 00:10:05.988 00:10:05.988 ' 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:05.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.988 --rc genhtml_branch_coverage=1 00:10:05.988 --rc genhtml_function_coverage=1 00:10:05.988 --rc genhtml_legend=1 00:10:05.988 --rc geninfo_all_blocks=1 00:10:05.988 --rc geninfo_unexecuted_blocks=1 00:10:05.988 00:10:05.988 ' 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:05.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.988 --rc genhtml_branch_coverage=1 00:10:05.988 --rc genhtml_function_coverage=1 00:10:05.988 --rc genhtml_legend=1 00:10:05.988 --rc geninfo_all_blocks=1 00:10:05.988 --rc geninfo_unexecuted_blocks=1 00:10:05.988 00:10:05.988 ' 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:05.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.988 --rc genhtml_branch_coverage=1 00:10:05.988 --rc genhtml_function_coverage=1 00:10:05.988 --rc genhtml_legend=1 00:10:05.988 --rc geninfo_all_blocks=1 00:10:05.988 --rc geninfo_unexecuted_blocks=1 00:10:05.988 00:10:05.988 ' 00:10:05.988 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.248 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.249 14:50:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:06.249 14:50:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:12.819 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:12.820 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:12.820 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.820 14:51:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:12.820 Found net devices under 0000:86:00.0: cvl_0_0 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:12.820 Found net devices under 0000:86:00.1: cvl_0_1 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:12.820 14:51:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:12.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:10:12.820 00:10:12.820 --- 10.0.0.2 ping statistics --- 00:10:12.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.820 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:12.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:10:12.820 00:10:12.820 --- 10.0.0.1 ping statistics --- 00:10:12.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.820 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.820 14:51:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:12.820 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.821 ************************************ 00:10:12.821 START TEST nvmf_filesystem_no_in_capsule 00:10:12.821 ************************************ 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3015768 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3015768 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3015768 ']' 00:10:12.821 
14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.821 [2024-12-11 14:51:05.136976] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:10:12.821 [2024-12-11 14:51:05.137018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.821 [2024-12-11 14:51:05.217520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.821 [2024-12-11 14:51:05.258518] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.821 [2024-12-11 14:51:05.258558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.821 [2024-12-11 14:51:05.258566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.821 [2024-12-11 14:51:05.258572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.821 [2024-12-11 14:51:05.258577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
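Reading note: the trace above is the harness building its loopback NVMe/TCP test bed. The two E810 ports show up as cvl_0_0 and cvl_0_1; cvl_0_0 is moved into a private network namespace for the target side while cvl_0_1 stays in the host namespace for the initiator, TCP port 4420 is opened, and reachability is verified with one ping in each direction before nvmf_tgt is started inside the namespace. A minimal standalone sketch of the same setup, assuming the interface names from this log, would be:

# sketch: rebuild the target/initiator split shown in the trace above
TARGET_NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$TARGET_NS"                                        # target side gets its own namespace
ip link set cvl_0_0 netns "$TARGET_NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator IP, host namespace
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, inside the namespace

ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up

# allow the NVMe/TCP listener port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# reachability check in both directions, exactly as the log does
ping -c 1 10.0.0.2
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1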
00:10:12.821 [2024-12-11 14:51:05.260007] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.821 [2024-12-11 14:51:05.260114] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.821 [2024-12-11 14:51:05.260224] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.821 [2024-12-11 14:51:05.260225] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.821 [2024-12-11 14:51:05.406168] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.821 Malloc1 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.821 14:51:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.821 [2024-12-11 14:51:05.563341] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:12.821 { 00:10:12.821 "name": "Malloc1", 00:10:12.821 "aliases": [ 00:10:12.821 "42d67dbf-ff4c-442a-90ba-20f80ae35052" 00:10:12.821 ], 00:10:12.821 "product_name": "Malloc disk", 00:10:12.821 "block_size": 512, 00:10:12.821 "num_blocks": 1048576, 00:10:12.821 "uuid": "42d67dbf-ff4c-442a-90ba-20f80ae35052", 00:10:12.821 "assigned_rate_limits": { 00:10:12.821 "rw_ios_per_sec": 0, 00:10:12.821 "rw_mbytes_per_sec": 0, 00:10:12.821 "r_mbytes_per_sec": 0, 00:10:12.821 "w_mbytes_per_sec": 0 00:10:12.821 }, 00:10:12.821 "claimed": true, 00:10:12.821 "claim_type": "exclusive_write", 00:10:12.821 "zoned": false, 00:10:12.821 "supported_io_types": { 00:10:12.821 "read": 
true, 00:10:12.821 "write": true, 00:10:12.821 "unmap": true, 00:10:12.821 "flush": true, 00:10:12.821 "reset": true, 00:10:12.821 "nvme_admin": false, 00:10:12.821 "nvme_io": false, 00:10:12.821 "nvme_io_md": false, 00:10:12.821 "write_zeroes": true, 00:10:12.821 "zcopy": true, 00:10:12.821 "get_zone_info": false, 00:10:12.821 "zone_management": false, 00:10:12.821 "zone_append": false, 00:10:12.821 "compare": false, 00:10:12.821 "compare_and_write": false, 00:10:12.821 "abort": true, 00:10:12.821 "seek_hole": false, 00:10:12.821 "seek_data": false, 00:10:12.821 "copy": true, 00:10:12.821 "nvme_iov_md": false 00:10:12.821 }, 00:10:12.821 "memory_domains": [ 00:10:12.821 { 00:10:12.821 "dma_device_id": "system", 00:10:12.821 "dma_device_type": 1 00:10:12.821 }, 00:10:12.821 { 00:10:12.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.821 "dma_device_type": 2 00:10:12.821 } 00:10:12.821 ], 00:10:12.821 "driver_specific": {} 00:10:12.821 } 00:10:12.821 ]' 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:12.821 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:12.822 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:12.822 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:12.822 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:12.822 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:12.822 14:51:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:14.202 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:14.202 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:14.202 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:14.202 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:14.202 14:51:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:16.107 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:16.107 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:16.107 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:16.107 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:16.107 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:16.107 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:16.107 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:16.107 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:16.107 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:16.107 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:16.107 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:16.107 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:16.107 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:16.107 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:16.107 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:16.107 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:16.107 14:51:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:16.107 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:16.675 14:51:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:17.612 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:17.612 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:17.612 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:17.612 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.612 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.612 ************************************ 00:10:17.612 START TEST filesystem_ext4 00:10:17.612 ************************************ 00:10:17.612 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
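Reading note: before the ext4 subtest begins, the provisioning captured above boils down to a handful of target-side RPCs plus one initiator-side connect. Condensed into a sketch that uses SPDK's scripts/rpc.py client in place of the harness's rpc_cmd wrapper, with the NQNs, sizes, and addresses taken from the log:

# target side, against the nvmf_tgt running inside cvl_0_0_ns_spdk
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0        # -u: io-unit-size, -c 0: no in-capsule data
rpc.py bdev_malloc_create 512 512 -b Malloc1               # 512 MiB RAM-backed bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side (host namespace); --hostnqn/--hostid omitted here, the harness passes the host UUID
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe

Each filesystem_<fstype> subtest that follows then runs mkfs on /dev/nvme0n1p1, mounts it at /mnt/device, creates and removes a file with a sync on either side, unmounts, and finally checks that the namespace and its partition are still visible in lsblk and that the target process is still alive (kill -0 $nvmfpid).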
00:10:17.612 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:17.612 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:17.612 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:17.612 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:17.612 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:17.612 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:17.612 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:17.612 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:17.612 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:17.612 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:17.612 mke2fs 1.47.0 (5-Feb-2023) 00:10:17.612 Discarding device blocks: 0/522240 done 00:10:17.871 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:17.871 Filesystem UUID: dc5a8a8d-ad2f-477c-81f8-1ee0ad3e7533 00:10:17.871 Superblock backups stored on blocks: 00:10:17.871 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:17.871 00:10:17.871 Allocating group tables: 0/64 done 00:10:17.871 Writing inode tables: 0/64 done 00:10:17.871 Creating journal (8192 blocks): done 00:10:18.129 Writing superblocks and filesystem accounting information: 0/64 done 00:10:18.129 00:10:18.129 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:18.129 14:51:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:24.696 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:24.696 14:51:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:24.696 
14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3015768 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:24.696 00:10:24.696 real 0m6.496s 00:10:24.696 user 0m0.028s 00:10:24.696 sys 0m0.069s 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:24.696 ************************************ 00:10:24.696 END TEST filesystem_ext4 00:10:24.696 ************************************ 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.696 ************************************ 00:10:24.696 START TEST filesystem_btrfs 00:10:24.696 ************************************ 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:24.696 14:51:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:24.696 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:24.696 btrfs-progs v6.8.1 00:10:24.696 See https://btrfs.readthedocs.io for more information. 00:10:24.696 00:10:24.696 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:24.696 NOTE: several default settings have changed in version 5.15, please make sure 00:10:24.696 this does not affect your deployments: 00:10:24.696 - DUP for metadata (-m dup) 00:10:24.696 - enabled no-holes (-O no-holes) 00:10:24.696 - enabled free-space-tree (-R free-space-tree) 00:10:24.696 00:10:24.696 Label: (null) 00:10:24.696 UUID: 27947e89-931d-4489-b94e-75b6061d8c86 00:10:24.696 Node size: 16384 00:10:24.696 Sector size: 4096 (CPU page size: 4096) 00:10:24.696 Filesystem size: 510.00MiB 00:10:24.696 Block group profiles: 00:10:24.696 Data: single 8.00MiB 00:10:24.696 Metadata: DUP 32.00MiB 00:10:24.696 System: DUP 8.00MiB 00:10:24.696 SSD detected: yes 00:10:24.696 Zoned device: no 00:10:24.696 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:24.696 Checksum: crc32c 00:10:24.696 Number of devices: 1 00:10:24.696 Devices: 00:10:24.696 ID SIZE PATH 00:10:24.696 1 510.00MiB /dev/nvme0n1p1 00:10:24.696 00:10:24.697 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:24.697 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:24.697 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:24.697 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:24.697 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:24.697 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:24.697 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:24.697 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:24.697 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3015768 00:10:24.697 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:24.697 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:24.697 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:24.697 
14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:24.697 00:10:24.697 real 0m0.574s 00:10:24.697 user 0m0.023s 00:10:24.697 sys 0m0.109s 00:10:24.697 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.697 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:24.697 ************************************ 00:10:24.697 END TEST filesystem_btrfs 00:10:24.697 ************************************ 00:10:24.956 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:24.956 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:24.956 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.956 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.956 ************************************ 00:10:24.956 START TEST filesystem_xfs 00:10:24.956 ************************************ 00:10:24.956 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:24.956 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:24.956 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:24.956 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:24.956 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:24.956 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:24.956 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:24.956 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:24.956 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:24.956 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:24.956 14:51:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:24.956 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:24.956 = sectsz=512 attr=2, projid32bit=1 00:10:24.956 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:24.956 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:24.956 data 
= bsize=4096 blocks=130560, imaxpct=25 00:10:24.956 = sunit=0 swidth=0 blks 00:10:24.956 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:24.956 log =internal log bsize=4096 blocks=16384, version=2 00:10:24.956 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:24.956 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:25.891 Discarding blocks...Done. 00:10:25.891 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:25.891 14:51:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:27.795 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:27.795 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:27.795 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:27.795 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:27.795 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:27.795 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:27.795 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3015768 00:10:27.795 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:27.795 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:27.795 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:27.795 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:27.795 00:10:27.795 real 0m2.982s 00:10:27.795 user 0m0.028s 00:10:27.795 sys 0m0.071s 00:10:27.795 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.795 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:27.795 ************************************ 00:10:27.795 END TEST filesystem_xfs 00:10:27.795 ************************************ 00:10:27.795 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:28.054 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:28.054 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:28.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.054 14:51:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:28.054 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:28.054 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:28.054 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.054 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:28.054 14:51:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.054 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:28.054 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.054 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.054 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.054 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.054 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:28.054 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3015768 00:10:28.054 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3015768 ']' 00:10:28.054 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3015768 00:10:28.054 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:28.054 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:28.054 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3015768 00:10:28.054 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:28.054 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:28.055 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3015768' 00:10:28.055 killing process with pid 3015768 00:10:28.055 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3015768 00:10:28.055 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 3015768 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:28.623 00:10:28.623 real 0m16.308s 00:10:28.623 user 1m4.126s 00:10:28.623 sys 0m1.377s 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.623 ************************************ 00:10:28.623 END TEST nvmf_filesystem_no_in_capsule 00:10:28.623 ************************************ 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:28.623 ************************************ 00:10:28.623 START TEST nvmf_filesystem_in_capsule 00:10:28.623 ************************************ 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=3019143 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 3019143 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3019143 ']' 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
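Reading note: the second pass starting here (nvmf_filesystem_in_capsule, pid 3019143) repeats the same provisioning and filesystem checks as the first. The only functional difference is the transport configuration, which now allows in-capsule data so that small writes can travel inside the command capsule without an extra ready-to-transfer round trip; that is the point of the 4096-byte setting seen below, roughly:

rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096     # -c 4096: allow up to 4 KiB of in-capsule data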
00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.623 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.623 [2024-12-11 14:51:21.520857] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:10:28.623 [2024-12-11 14:51:21.520900] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.623 [2024-12-11 14:51:21.599461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:28.623 [2024-12-11 14:51:21.636755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:28.623 [2024-12-11 14:51:21.636795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:28.623 [2024-12-11 14:51:21.636802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:28.623 [2024-12-11 14:51:21.636808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:28.623 [2024-12-11 14:51:21.636813] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:28.623 [2024-12-11 14:51:21.638345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.623 [2024-12-11 14:51:21.638454] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:28.623 [2024-12-11 14:51:21.638540] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.623 [2024-12-11 14:51:21.638541] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.883 [2024-12-11 14:51:21.784843] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:28.883 14:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.883 Malloc1 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.883 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.143 [2024-12-11 14:51:21.941351] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.143 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.143 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:29.143 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:29.143 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:29.143 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:29.143 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:29.143 14:51:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:29.143 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.143 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.143 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.143 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:29.143 { 00:10:29.143 "name": "Malloc1", 00:10:29.143 "aliases": [ 00:10:29.143 "68d05168-d412-476f-bb39-b26f7318d347" 00:10:29.143 ], 00:10:29.143 "product_name": "Malloc disk", 00:10:29.143 "block_size": 512, 00:10:29.143 "num_blocks": 1048576, 00:10:29.143 "uuid": "68d05168-d412-476f-bb39-b26f7318d347", 00:10:29.143 "assigned_rate_limits": { 00:10:29.143 "rw_ios_per_sec": 0, 00:10:29.143 "rw_mbytes_per_sec": 0, 00:10:29.143 "r_mbytes_per_sec": 0, 00:10:29.143 "w_mbytes_per_sec": 0 00:10:29.143 }, 00:10:29.143 "claimed": true, 00:10:29.143 "claim_type": "exclusive_write", 00:10:29.143 "zoned": false, 00:10:29.143 "supported_io_types": { 00:10:29.143 "read": true, 00:10:29.143 "write": true, 00:10:29.143 "unmap": true, 00:10:29.143 "flush": true, 00:10:29.143 "reset": true, 00:10:29.143 "nvme_admin": false, 00:10:29.143 "nvme_io": false, 00:10:29.143 "nvme_io_md": false, 00:10:29.143 "write_zeroes": true, 00:10:29.143 "zcopy": true, 00:10:29.143 "get_zone_info": false, 00:10:29.143 "zone_management": false, 00:10:29.143 "zone_append": false, 00:10:29.143 "compare": false, 00:10:29.143 "compare_and_write": false, 00:10:29.143 "abort": true, 00:10:29.143 "seek_hole": false, 00:10:29.143 "seek_data": false, 00:10:29.143 "copy": true, 00:10:29.143 "nvme_iov_md": false 00:10:29.143 }, 00:10:29.143 "memory_domains": [ 00:10:29.143 { 00:10:29.143 "dma_device_id": "system", 00:10:29.143 "dma_device_type": 1 00:10:29.143 }, 00:10:29.143 { 00:10:29.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.143 "dma_device_type": 2 00:10:29.143 } 00:10:29.143 ], 00:10:29.143 "driver_specific": {} 00:10:29.143 } 00:10:29.143 ]' 00:10:29.143 14:51:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:29.143 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:29.143 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:29.143 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:29.143 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:29.143 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:29.143 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:29.143 14:51:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:30.520 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:30.520 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:30.520 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:30.520 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:30.520 14:51:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:32.443 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:32.443 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:32.443 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:32.443 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:32.443 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:32.443 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:32.443 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:32.443 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:32.443 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:32.443 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:32.443 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:32.443 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:32.443 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:32.443 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:32.443 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:32.443 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:32.444 14:51:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:32.756 14:51:25 
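For reference, the target/filesystem.sh steps traced above reduce to the short sequence below. This is a minimal sketch, not the test script itself: it assumes an SPDK nvmf_tgt is already running with a TCP transport created earlier in the test (the rpc_cmd wrapper seen in the trace corresponds to scripts/rpc.py against that target), and that the host NQN/ID are whatever nvme gen-hostnqn produced for this machine.

  # target side: 512 MiB malloc bdev (1048576 x 512 B blocks), exported over NVMe/TCP
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side: connect, wait for the namespace to show up, then carve out one GPT partition
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
  scripts/rpc.py bdev_get_bdevs -b Malloc1 | jq '.[] .block_size, .[] .num_blocks'   # 512, 1048576
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%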
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:33.069 14:51:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:34.447 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:34.447 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:34.447 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:34.447 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.447 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.447 ************************************ 00:10:34.447 START TEST filesystem_in_capsule_ext4 00:10:34.447 ************************************ 00:10:34.447 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:34.447 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:34.447 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:34.447 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:34.447 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:34.447 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:34.447 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:34.447 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:34.447 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:34.447 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:34.447 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:34.447 mke2fs 1.47.0 (5-Feb-2023) 00:10:34.447 Discarding device blocks: 0/522240 done 00:10:34.447 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:34.447 Filesystem UUID: 9d1dca2e-c949-45f8-909d-32b540ee0935 00:10:34.447 Superblock backups stored on blocks: 00:10:34.447 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:34.447 00:10:34.447 Allocating group tables: 0/64 done 00:10:34.447 Writing inode tables: 
0/64 done 00:10:35.015 Creating journal (8192 blocks): done 00:10:35.015 Writing superblocks and filesystem accounting information: 0/64 done 00:10:35.015 00:10:35.015 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:35.015 14:51:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:41.580 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:41.580 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:41.580 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:41.580 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:41.580 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:41.580 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:41.580 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3019143 00:10:41.580 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:41.580 14:51:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:41.580 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:41.580 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:41.580 00:10:41.580 real 0m6.916s 00:10:41.580 user 0m0.025s 00:10:41.580 sys 0m0.073s 00:10:41.580 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.580 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:41.580 ************************************ 00:10:41.580 END TEST filesystem_in_capsule_ext4 00:10:41.580 ************************************ 00:10:41.580 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:41.580 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:41.580 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.580 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:41.580 
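The filesystem_in_capsule_ext4 run above, and the btrfs and xfs runs that follow, all exercise the same short mount/IO/unmount cycle from target/filesystem.sh. Roughly, with the force flag chosen the way make_filesystem chooses it:

  fstype=ext4                                 # the later runs repeat this with btrfs and xfs
  dev=/dev/nvme0n1p1
  case "$fstype" in ext4) force=-F ;; *) force=-f ;; esac
  mkfs.$fstype $force "$dev"
  mount "$dev" /mnt/device
  touch /mnt/device/aaa; sync
  rm /mnt/device/aaa;    sync
  umount /mnt/device
  kill -0 "$nvmfpid"                          # the nvmf_tgt (pid 3019143 in this run) must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1p1     # ...and the namespace/partition still visible to the host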
************************************ 00:10:41.580 START TEST filesystem_in_capsule_btrfs 00:10:41.580 ************************************ 00:10:41.580 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:41.580 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:41.580 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:41.580 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:41.580 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:41.580 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:41.580 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:41.581 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:41.581 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:41.581 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:41.581 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:41.581 btrfs-progs v6.8.1 00:10:41.581 See https://btrfs.readthedocs.io for more information. 00:10:41.581 00:10:41.581 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:41.581 NOTE: several default settings have changed in version 5.15, please make sure 00:10:41.581 this does not affect your deployments: 00:10:41.581 - DUP for metadata (-m dup) 00:10:41.581 - enabled no-holes (-O no-holes) 00:10:41.581 - enabled free-space-tree (-R free-space-tree) 00:10:41.581 00:10:41.581 Label: (null) 00:10:41.581 UUID: ab5dda6d-11af-4a1a-9716-eee1709596a1 00:10:41.581 Node size: 16384 00:10:41.581 Sector size: 4096 (CPU page size: 4096) 00:10:41.581 Filesystem size: 510.00MiB 00:10:41.581 Block group profiles: 00:10:41.581 Data: single 8.00MiB 00:10:41.581 Metadata: DUP 32.00MiB 00:10:41.581 System: DUP 8.00MiB 00:10:41.581 SSD detected: yes 00:10:41.581 Zoned device: no 00:10:41.581 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:41.581 Checksum: crc32c 00:10:41.581 Number of devices: 1 00:10:41.581 Devices: 00:10:41.581 ID SIZE PATH 00:10:41.581 1 510.00MiB /dev/nvme0n1p1 00:10:41.581 00:10:41.581 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:41.581 14:51:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:42.519 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:42.519 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:42.519 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:42.519 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:42.519 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:42.519 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:42.519 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3019143 00:10:42.519 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:42.519 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:42.519 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:42.519 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:42.519 00:10:42.519 real 0m1.322s 00:10:42.519 user 0m0.027s 00:10:42.519 sys 0m0.114s 00:10:42.519 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:42.519 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:10:42.519 ************************************ 00:10:42.519 END TEST filesystem_in_capsule_btrfs 00:10:42.519 ************************************ 00:10:42.519 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:42.519 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:42.520 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:42.520 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:42.520 ************************************ 00:10:42.520 START TEST filesystem_in_capsule_xfs 00:10:42.520 ************************************ 00:10:42.520 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:42.520 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:42.520 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:42.520 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:42.520 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:42.520 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:42.520 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:42.520 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:42.520 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:42.520 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:42.520 14:51:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:42.520 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:42.520 = sectsz=512 attr=2, projid32bit=1 00:10:42.520 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:42.520 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:42.520 data = bsize=4096 blocks=130560, imaxpct=25 00:10:42.520 = sunit=0 swidth=0 blks 00:10:42.520 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:42.520 log =internal log bsize=4096 blocks=16384, version=2 00:10:42.520 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:42.520 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:43.901 Discarding blocks...Done. 
00:10:43.901 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:43.901 14:51:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:45.276 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:45.276 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:45.277 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:45.277 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:45.535 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:45.535 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:45.535 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3019143 00:10:45.535 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:45.535 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:45.535 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:45.535 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:45.535 00:10:45.535 real 0m2.884s 00:10:45.535 user 0m0.026s 00:10:45.535 sys 0m0.071s 00:10:45.535 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.535 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:45.535 ************************************ 00:10:45.535 END TEST filesystem_in_capsule_xfs 00:10:45.535 ************************************ 00:10:45.535 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:45.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3019143 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3019143 ']' 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3019143 00:10:45.794 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:45.795 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.795 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3019143 00:10:46.053 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:46.053 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:46.053 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3019143' 00:10:46.053 killing process with pid 3019143 00:10:46.053 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3019143 00:10:46.053 14:51:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3019143 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:46.313 00:10:46.313 real 0m17.721s 00:10:46.313 user 1m9.687s 00:10:46.313 sys 0m1.453s 00:10:46.313 14:51:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.313 ************************************ 00:10:46.313 END TEST nvmf_filesystem_in_capsule 00:10:46.313 ************************************ 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.313 rmmod nvme_tcp 00:10:46.313 rmmod nvme_fabrics 00:10:46.313 rmmod nvme_keyring 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.313 14:51:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:48.849 00:10:48.849 real 0m42.736s 00:10:48.849 user 2m15.905s 00:10:48.849 sys 0m7.495s 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:48.849 
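Teardown, as traced in the last few entries, is the mirror image: drop the test partition, disconnect the host, delete the subsystem, stop the target process, then let nvmftestfini unload the kernel modules and revert the firewall and addressing changes. A condensed sketch, with module and interface names as they appear in this log:

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid" && wait "$nvmfpid"          # killprocess 3019143
  modprobe -v -r nvme-tcp                     # rmmod nvme_tcp / nvme_fabrics / nvme_keyring above
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test's tagged ACCEPT rules
  ip -4 addr flush cvl_0_1                               # strip the initiator-side test address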
************************************ 00:10:48.849 END TEST nvmf_filesystem 00:10:48.849 ************************************ 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:48.849 ************************************ 00:10:48.849 START TEST nvmf_target_discovery 00:10:48.849 ************************************ 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:48.849 * Looking for test storage... 00:10:48.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:48.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.849 --rc genhtml_branch_coverage=1 00:10:48.849 --rc genhtml_function_coverage=1 00:10:48.849 --rc genhtml_legend=1 00:10:48.849 --rc geninfo_all_blocks=1 00:10:48.849 --rc geninfo_unexecuted_blocks=1 00:10:48.849 00:10:48.849 ' 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:48.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.849 --rc genhtml_branch_coverage=1 00:10:48.849 --rc genhtml_function_coverage=1 00:10:48.849 --rc genhtml_legend=1 00:10:48.849 --rc geninfo_all_blocks=1 00:10:48.849 --rc geninfo_unexecuted_blocks=1 00:10:48.849 00:10:48.849 ' 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:48.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.849 --rc genhtml_branch_coverage=1 00:10:48.849 --rc genhtml_function_coverage=1 00:10:48.849 --rc genhtml_legend=1 00:10:48.849 --rc geninfo_all_blocks=1 00:10:48.849 --rc geninfo_unexecuted_blocks=1 00:10:48.849 00:10:48.849 ' 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:48.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.849 --rc genhtml_branch_coverage=1 00:10:48.849 --rc genhtml_function_coverage=1 00:10:48.849 --rc genhtml_legend=1 00:10:48.849 --rc geninfo_all_blocks=1 00:10:48.849 --rc geninfo_unexecuted_blocks=1 00:10:48.849 00:10:48.849 ' 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.849 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:48.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:48.850 14:51:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:55.419 14:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:55.419 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:55.419 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:55.420 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:55.420 Found net devices under 0000:86:00.0: cvl_0_0 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:55.420 Found net devices under 0000:86:00.1: cvl_0_1 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.420 14:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:55.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:55.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:10:55.420 00:10:55.420 --- 10.0.0.2 ping statistics --- 00:10:55.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.420 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:10:55.420 00:10:55.420 --- 10.0.0.1 ping statistics --- 00:10:55.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.420 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3025876 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3025876 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3025876 ']' 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.420 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.420 [2024-12-11 14:51:47.643515] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:10:55.420 [2024-12-11 14:51:47.643565] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:55.420 [2024-12-11 14:51:47.722704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:55.420 [2024-12-11 14:51:47.764377] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:55.420 [2024-12-11 14:51:47.764412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:55.420 [2024-12-11 14:51:47.764419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:55.420 [2024-12-11 14:51:47.764426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:55.420 [2024-12-11 14:51:47.764431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
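
Before the target app is started above, nvmf_tcp_init (the @250-@291 lines in the trace) moves one E810 port into a private network namespace so the initiator on the host and the target in the namespace talk over a real TCP path. A condensed, hand-runnable sketch of that topology, assuming the interface names and addresses from the log; it is not the harness itself:

# Sketch of the loopback topology built by nvmf_tcp_init in this trace.
ip netns add cvl_0_0_ns_spdk                                          # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic reach the host side
ping -c 1 10.0.0.2                                                    # host -> namespace reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # namespace -> host reachability check
# The target then runs inside the namespace, as in the trace (path relative to an SPDK build tree):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The two pings mirror the reachability check recorded below the iptables rule, and 4420 is the port every subsystem listener in this test binds to.
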
00:10:55.420 [2024-12-11 14:51:47.765855] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:55.420 [2024-12-11 14:51:47.765967] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:55.420 [2024-12-11 14:51:47.766073] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.421 [2024-12-11 14:51:47.766074] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 [2024-12-11 14:51:47.908355] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 Null1 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 14:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 [2024-12-11 14:51:47.970310] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 Null2 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:55.421 Null3 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 Null4 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.421 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:55.421 00:10:55.421 Discovery Log Number of Records 6, Generation counter 6 00:10:55.421 =====Discovery Log Entry 0====== 00:10:55.421 trtype: tcp 00:10:55.421 adrfam: ipv4 00:10:55.421 subtype: current discovery subsystem 00:10:55.421 treq: not required 00:10:55.421 portid: 0 00:10:55.421 trsvcid: 4420 00:10:55.421 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:55.421 traddr: 10.0.0.2 00:10:55.421 eflags: explicit discovery connections, duplicate discovery information 00:10:55.421 sectype: none 00:10:55.421 =====Discovery Log Entry 1====== 00:10:55.421 trtype: tcp 00:10:55.421 adrfam: ipv4 00:10:55.421 subtype: nvme subsystem 00:10:55.421 treq: not required 00:10:55.421 portid: 0 00:10:55.421 trsvcid: 4420 00:10:55.421 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:55.421 traddr: 10.0.0.2 00:10:55.421 eflags: none 00:10:55.421 sectype: none 00:10:55.421 =====Discovery Log Entry 2====== 00:10:55.422 trtype: tcp 00:10:55.422 adrfam: ipv4 00:10:55.422 subtype: nvme subsystem 00:10:55.422 treq: not required 00:10:55.422 portid: 0 00:10:55.422 trsvcid: 4420 00:10:55.422 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:55.422 traddr: 10.0.0.2 00:10:55.422 eflags: none 00:10:55.422 sectype: none 00:10:55.422 =====Discovery Log Entry 3====== 00:10:55.422 trtype: tcp 00:10:55.422 adrfam: ipv4 00:10:55.422 subtype: nvme subsystem 00:10:55.422 treq: not required 00:10:55.422 portid: 0 00:10:55.422 trsvcid: 4420 00:10:55.422 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:55.422 traddr: 10.0.0.2 00:10:55.422 eflags: none 00:10:55.422 sectype: none 00:10:55.422 =====Discovery Log Entry 4====== 00:10:55.422 trtype: tcp 00:10:55.422 adrfam: ipv4 00:10:55.422 subtype: nvme subsystem 
00:10:55.422 treq: not required 00:10:55.422 portid: 0 00:10:55.422 trsvcid: 4420 00:10:55.422 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:55.422 traddr: 10.0.0.2 00:10:55.422 eflags: none 00:10:55.422 sectype: none 00:10:55.422 =====Discovery Log Entry 5====== 00:10:55.422 trtype: tcp 00:10:55.422 adrfam: ipv4 00:10:55.422 subtype: discovery subsystem referral 00:10:55.422 treq: not required 00:10:55.422 portid: 0 00:10:55.422 trsvcid: 4430 00:10:55.422 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:55.422 traddr: 10.0.0.2 00:10:55.422 eflags: none 00:10:55.422 sectype: none 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:55.422 Perform nvmf subsystem discovery via RPC 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.422 [ 00:10:55.422 { 00:10:55.422 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:55.422 "subtype": "Discovery", 00:10:55.422 "listen_addresses": [ 00:10:55.422 { 00:10:55.422 "trtype": "TCP", 00:10:55.422 "adrfam": "IPv4", 00:10:55.422 "traddr": "10.0.0.2", 00:10:55.422 "trsvcid": "4420" 00:10:55.422 } 00:10:55.422 ], 00:10:55.422 "allow_any_host": true, 00:10:55.422 "hosts": [] 00:10:55.422 }, 00:10:55.422 { 00:10:55.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:55.422 "subtype": "NVMe", 00:10:55.422 "listen_addresses": [ 00:10:55.422 { 00:10:55.422 "trtype": "TCP", 00:10:55.422 "adrfam": "IPv4", 00:10:55.422 "traddr": "10.0.0.2", 00:10:55.422 "trsvcid": "4420" 00:10:55.422 } 00:10:55.422 ], 00:10:55.422 "allow_any_host": true, 00:10:55.422 "hosts": [], 00:10:55.422 "serial_number": "SPDK00000000000001", 00:10:55.422 "model_number": "SPDK bdev Controller", 00:10:55.422 "max_namespaces": 32, 00:10:55.422 "min_cntlid": 1, 00:10:55.422 "max_cntlid": 65519, 00:10:55.422 "namespaces": [ 00:10:55.422 { 00:10:55.422 "nsid": 1, 00:10:55.422 "bdev_name": "Null1", 00:10:55.422 "name": "Null1", 00:10:55.422 "nguid": "C4A9C8DDD49D4A9DA4777E5B99C8BA78", 00:10:55.422 "uuid": "c4a9c8dd-d49d-4a9d-a477-7e5b99c8ba78" 00:10:55.422 } 00:10:55.422 ] 00:10:55.422 }, 00:10:55.422 { 00:10:55.422 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:55.422 "subtype": "NVMe", 00:10:55.422 "listen_addresses": [ 00:10:55.422 { 00:10:55.422 "trtype": "TCP", 00:10:55.422 "adrfam": "IPv4", 00:10:55.422 "traddr": "10.0.0.2", 00:10:55.422 "trsvcid": "4420" 00:10:55.422 } 00:10:55.422 ], 00:10:55.422 "allow_any_host": true, 00:10:55.422 "hosts": [], 00:10:55.422 "serial_number": "SPDK00000000000002", 00:10:55.422 "model_number": "SPDK bdev Controller", 00:10:55.422 "max_namespaces": 32, 00:10:55.422 "min_cntlid": 1, 00:10:55.422 "max_cntlid": 65519, 00:10:55.422 "namespaces": [ 00:10:55.422 { 00:10:55.422 "nsid": 1, 00:10:55.422 "bdev_name": "Null2", 00:10:55.422 "name": "Null2", 00:10:55.422 "nguid": "27E3C8F27D10436687BCE80CC5B67B6D", 00:10:55.422 "uuid": "27e3c8f2-7d10-4366-87bc-e80cc5b67b6d" 00:10:55.422 } 00:10:55.422 ] 00:10:55.422 }, 00:10:55.422 { 00:10:55.422 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:55.422 "subtype": "NVMe", 00:10:55.422 "listen_addresses": [ 00:10:55.422 { 00:10:55.422 "trtype": "TCP", 00:10:55.422 "adrfam": "IPv4", 00:10:55.422 "traddr": "10.0.0.2", 
00:10:55.422 "trsvcid": "4420" 00:10:55.422 } 00:10:55.422 ], 00:10:55.422 "allow_any_host": true, 00:10:55.422 "hosts": [], 00:10:55.422 "serial_number": "SPDK00000000000003", 00:10:55.422 "model_number": "SPDK bdev Controller", 00:10:55.422 "max_namespaces": 32, 00:10:55.422 "min_cntlid": 1, 00:10:55.422 "max_cntlid": 65519, 00:10:55.422 "namespaces": [ 00:10:55.422 { 00:10:55.422 "nsid": 1, 00:10:55.422 "bdev_name": "Null3", 00:10:55.422 "name": "Null3", 00:10:55.422 "nguid": "469238B9936E4879A95842BD43C1E45F", 00:10:55.422 "uuid": "469238b9-936e-4879-a958-42bd43c1e45f" 00:10:55.422 } 00:10:55.422 ] 00:10:55.422 }, 00:10:55.422 { 00:10:55.422 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:55.422 "subtype": "NVMe", 00:10:55.422 "listen_addresses": [ 00:10:55.422 { 00:10:55.422 "trtype": "TCP", 00:10:55.422 "adrfam": "IPv4", 00:10:55.422 "traddr": "10.0.0.2", 00:10:55.422 "trsvcid": "4420" 00:10:55.422 } 00:10:55.422 ], 00:10:55.422 "allow_any_host": true, 00:10:55.422 "hosts": [], 00:10:55.422 "serial_number": "SPDK00000000000004", 00:10:55.422 "model_number": "SPDK bdev Controller", 00:10:55.422 "max_namespaces": 32, 00:10:55.422 "min_cntlid": 1, 00:10:55.422 "max_cntlid": 65519, 00:10:55.422 "namespaces": [ 00:10:55.422 { 00:10:55.422 "nsid": 1, 00:10:55.422 "bdev_name": "Null4", 00:10:55.422 "name": "Null4", 00:10:55.422 "nguid": "90D32877AEF5465EBDC4426046DAB895", 00:10:55.422 "uuid": "90d32877-aef5-465e-bdc4-426046dab895" 00:10:55.422 } 00:10:55.422 ] 00:10:55.422 } 00:10:55.422 ] 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.422 14:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.422 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:55.423 14:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:55.423 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:55.423 rmmod nvme_tcp 00:10:55.423 rmmod nvme_fabrics 00:10:55.423 rmmod nvme_keyring 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3025876 ']' 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3025876 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3025876 ']' 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3025876 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3025876 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3025876' 00:10:55.682 killing process with pid 3025876 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3025876 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3025876 00:10:55.682 14:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.682 14:51:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.216 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:58.216 00:10:58.216 real 0m9.347s 00:10:58.216 user 0m5.624s 00:10:58.216 sys 0m4.824s 00:10:58.216 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.216 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:58.216 ************************************ 00:10:58.216 END TEST nvmf_target_discovery 00:10:58.216 ************************************ 00:10:58.216 14:51:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:58.216 14:51:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:58.216 14:51:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.216 14:51:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:58.216 ************************************ 00:10:58.216 START TEST nvmf_referrals 00:10:58.216 ************************************ 00:10:58.216 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:58.216 * Looking for test storage... 
00:10:58.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:58.216 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:58.216 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:10:58.216 14:51:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:58.216 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:58.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.217 --rc genhtml_branch_coverage=1 00:10:58.217 --rc genhtml_function_coverage=1 00:10:58.217 --rc genhtml_legend=1 00:10:58.217 --rc geninfo_all_blocks=1 00:10:58.217 --rc geninfo_unexecuted_blocks=1 00:10:58.217 00:10:58.217 ' 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:58.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.217 --rc genhtml_branch_coverage=1 00:10:58.217 --rc genhtml_function_coverage=1 00:10:58.217 --rc genhtml_legend=1 00:10:58.217 --rc geninfo_all_blocks=1 00:10:58.217 --rc geninfo_unexecuted_blocks=1 00:10:58.217 00:10:58.217 ' 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:58.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.217 --rc genhtml_branch_coverage=1 00:10:58.217 --rc genhtml_function_coverage=1 00:10:58.217 --rc genhtml_legend=1 00:10:58.217 --rc geninfo_all_blocks=1 00:10:58.217 --rc geninfo_unexecuted_blocks=1 00:10:58.217 00:10:58.217 ' 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:58.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:58.217 --rc genhtml_branch_coverage=1 00:10:58.217 --rc genhtml_function_coverage=1 00:10:58.217 --rc genhtml_legend=1 00:10:58.217 --rc geninfo_all_blocks=1 00:10:58.217 --rc geninfo_unexecuted_blocks=1 00:10:58.217 00:10:58.217 ' 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 
-- # uname -s 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:58.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:58.217 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:58.218 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.218 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:58.218 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:58.218 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:58.218 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.218 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.218 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.218 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:58.218 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:58.218 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:58.218 14:51:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 
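
The referrals test now repeats the NIC classification step: pci_bus_cache entries are bucketed into e810, x722, and mlx arrays by vendor/device ID (0x1592 and 0x159b for Intel E810, 0x37d2 for X722, the 0x10xx/0xa2xx IDs for Mellanox ConnectX). A rough stand-alone equivalent using lspci, which is an assumption for illustration rather than what nvmf/common.sh actually does:

# Sketch: classify NICs by PCI ID the way the e810/x722/mlx arrays above are filled.
declare -a e810 x722 mlx
for id in 1592 159b; do                       # Intel E810 device IDs seen in the trace
    while read -r bdf _; do e810+=("$bdf"); done < <(lspci -Dn -d "8086:${id}" 2>/dev/null)
done
while read -r bdf _; do x722+=("$bdf"); done < <(lspci -Dn -d 8086:37d2 2>/dev/null)
for id in a2dc 1021 a2d6 101d 101b 1017 1019 1015 1013; do   # Mellanox ConnectX IDs from the trace
    while read -r bdf _; do mlx+=("$bdf"); done < <(lspci -Dn -d "15b3:${id}" 2>/dev/null)
done
echo "e810: ${e810[*]:-none}  x722: ${x722[*]:-none}  mlx: ${mlx[*]:-none}"

With SPDK_TEST_NVMF_NICS=e810 the harness keeps only the e810 bucket, which is why the two 0x159b ports are the ones reported found below.
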
00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:04.786 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:04.786 Found 0000:86:00.1 (0x8086 
- 0x159b) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:04.786 Found net devices under 0000:86:00.0: cvl_0_0 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:04.786 Found net devices under 0000:86:00.1: cvl_0_1 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 
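The span above is the gather_supported_nvmf_pci_devs step of test/nvmf/common.sh: it fills the e810/x722/mlx arrays with PCI vendor:device IDs out of pci_bus_cache, takes the e810 branch ([[ e810 == e810 ]]), and then maps each selected PCI function to its kernel interface through sysfs, which is how 0000:86:00.0 and 0000:86:00.1 turn into cvl_0_0 and cvl_0_1. A minimal standalone sketch of that sysfs resolution, with the two functions hard-coded as an assumption rather than discovered from the bus cache, would be:

  pci_devs=(0000:86:00.0 0000:86:00.1)                  # assumed: the E810 functions reported above
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdev entries registered for this function
      pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs prefix, keeping e.g. cvl_0_0
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done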
00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:04.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:04.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:11:04.786 00:11:04.786 --- 10.0.0.2 ping statistics --- 00:11:04.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.786 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:11:04.786 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:04.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:11:04.787 00:11:04.787 --- 10.0.0.1 ping statistics --- 00:11:04.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.787 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:11:04.787 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.787 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:04.787 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:04.787 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.787 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:04.787 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:04.787 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.787 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:04.787 14:51:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3029444 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3029444 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3029444 ']' 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
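nvmf_tcp_init, traced above, builds the two-ended topology the TCP tests run on: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, both directions are ping-checked, and nvmf_tgt is then launched inside the namespace. A condensed sketch of the same sequence (root required; interface, namespace and address values copied from the trace, the nvmf_tgt path abbreviated to the build tree):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port toward the host
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Running the target under ip netns exec is what lets every later RPC and nvme command exercise a real TCP path between two physical ports on the same machine.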
00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.787 [2024-12-11 14:51:57.095547] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:11:04.787 [2024-12-11 14:51:57.095598] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.787 [2024-12-11 14:51:57.176364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.787 [2024-12-11 14:51:57.217072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.787 [2024-12-11 14:51:57.217111] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.787 [2024-12-11 14:51:57.217119] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.787 [2024-12-11 14:51:57.217125] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.787 [2024-12-11 14:51:57.217130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.787 [2024-12-11 14:51:57.218692] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.787 [2024-12-11 14:51:57.218800] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.787 [2024-12-11 14:51:57.218906] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.787 [2024-12-11 14:51:57.218907] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.787 [2024-12-11 14:51:57.369053] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
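With the target up, referrals.sh does all of its work over the RPC socket via rpc_cmd (the harness wrapper around scripts/rpc.py): it creates the TCP transport, attaches a listener for the discovery subsystem on 10.0.0.2:8009, and then, just below, registers three referral entries pointing at 127.0.0.2-127.0.0.4 port 4430, which the subsequent jq length check expects to count as 3. The same calls, lifted out of the trace and kept in the traced argument order, are:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery   # discovery service listener
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
  rpc_cmd nvmf_discovery_get_referrals | jq length                           # the test expects 3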
00:11:04.787 [2024-12-11 14:51:57.392324] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.787 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.788 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:04.788 14:51:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:04.788 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:04.788 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:04.788 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:04.788 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:04.788 14:51:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:05.046 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:05.304 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:05.304 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:05.304 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:05.304 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:05.304 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:05.304 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:05.304 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:05.304 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:05.304 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:05.304 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:05.563 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:05.563 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:05.563 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:05.563 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:05.563 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:05.563 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.821 14:51:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:05.821 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:06.079 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:06.079 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:06.079 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:06.079 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:06.079 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:06.079 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:06.079 14:51:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:06.079 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:06.079 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:06.079 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:06.079 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:06.079 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:06.080 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:06.338 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:06.338 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:06.338 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.338 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.338 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.338 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:06.338 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:06.338 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.338 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.338 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.338 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:06.338 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:06.338 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:06.338 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:06.338 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:06.338 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:06.338 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
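The checks driving the referral flow above always compare two views of the same table: the target's RPC view (nvmf_discovery_get_referrals piped through jq) and the fabric view obtained by running a real nvme discover against 10.0.0.2:8009 and filtering the discovery log page by record subtype; the referrals are then removed again and both views are expected to drain to empty. Reduced to the two probes plus the subsystem-NQN variant of a referral (assuming NVME_HOSTNQN and NVME_HOSTID are exported as in the harness), the pattern is:

  # RPC view: referral target addresses known to the SPDK target
  rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # Initiator view: log page records other than the current discovery subsystem
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  # A referral may also name a specific subsystem instead of a discovery service
  rpc_cmd nvmf_discovery_add_referral    -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1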
00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:06.596 rmmod nvme_tcp 00:11:06.596 rmmod nvme_fabrics 00:11:06.596 rmmod nvme_keyring 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3029444 ']' 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3029444 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3029444 ']' 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3029444 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.596 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3029444 00:11:06.855 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.855 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.855 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3029444' 00:11:06.855 killing process with pid 3029444 00:11:06.855 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3029444 00:11:06.855 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3029444 00:11:06.855 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:06.855 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:06.855 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:06.855 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:06.855 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:06.855 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:06.855 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:06.855 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:06.855 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:06.855 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.855 14:51:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.856 14:51:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.392 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:09.392 00:11:09.392 real 0m11.059s 00:11:09.392 user 0m13.006s 00:11:09.392 sys 0m5.257s 00:11:09.392 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.392 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:09.392 ************************************ 00:11:09.392 END TEST nvmf_referrals 00:11:09.392 ************************************ 00:11:09.392 14:52:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:09.392 14:52:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:09.392 14:52:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.392 14:52:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:09.392 ************************************ 00:11:09.392 START TEST nvmf_connect_disconnect 00:11:09.392 ************************************ 00:11:09.392 14:52:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:09.392 * Looking for test storage... 00:11:09.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.392 14:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:09.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.392 --rc genhtml_branch_coverage=1 00:11:09.392 --rc genhtml_function_coverage=1 00:11:09.392 --rc genhtml_legend=1 00:11:09.392 --rc geninfo_all_blocks=1 00:11:09.392 --rc geninfo_unexecuted_blocks=1 00:11:09.392 00:11:09.392 ' 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:09.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.392 --rc genhtml_branch_coverage=1 00:11:09.392 --rc genhtml_function_coverage=1 00:11:09.392 --rc genhtml_legend=1 00:11:09.392 --rc geninfo_all_blocks=1 00:11:09.392 --rc geninfo_unexecuted_blocks=1 00:11:09.392 00:11:09.392 ' 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:09.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.392 --rc genhtml_branch_coverage=1 00:11:09.392 --rc genhtml_function_coverage=1 00:11:09.392 --rc genhtml_legend=1 00:11:09.392 --rc geninfo_all_blocks=1 00:11:09.392 --rc geninfo_unexecuted_blocks=1 00:11:09.392 00:11:09.392 ' 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:09.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.392 --rc genhtml_branch_coverage=1 00:11:09.392 --rc genhtml_function_coverage=1 00:11:09.392 --rc genhtml_legend=1 00:11:09.392 --rc geninfo_all_blocks=1 00:11:09.392 --rc geninfo_unexecuted_blocks=1 00:11:09.392 00:11:09.392 ' 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:11:09.392 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.393 14:52:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.393 14:52:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:15.965 
14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:15.965 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.965 
14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:15.965 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:15.965 Found net devices under 0000:86:00.0: cvl_0_0 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
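Each selected PCI function is then resolved to its kernel interface name through the device's net/ directory, which is how the trace arrives at cvl_0_0 for 0000:86:00.0 (and cvl_0_1 for 0000:86:00.1 just below). A standalone equivalent, using the sysfs path seen in the trace:

  pci=0000:86:00.0
  for ifdir in "/sys/bus/pci/devices/$pci/net/"*; do
    name=${ifdir##*/}
    state=$(cat "/sys/class/net/$name/operstate")  # the trace's '[[ up == up ]]' check reads a similar attribute
    echo "Found net device under $pci: $name ($state)"
  done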
00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:15.965 Found net devices under 0000:86:00.1: cvl_0_1 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:15.965 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:11:15.966 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:15.966 14:52:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:15.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:11:15.966 00:11:15.966 --- 10.0.0.2 ping statistics --- 00:11:15.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.966 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:15.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:11:15.966 00:11:15.966 --- 10.0.0.1 ping statistics --- 00:11:15.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.966 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3033526 00:11:15.966 14:52:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3033526 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3033526 ']' 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.966 14:52:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:15.966 [2024-12-11 14:52:08.311287] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:11:15.966 [2024-12-11 14:52:08.311330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.966 [2024-12-11 14:52:08.390313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.966 [2024-12-11 14:52:08.432386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.966 [2024-12-11 14:52:08.432425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.966 [2024-12-11 14:52:08.432432] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.966 [2024-12-11 14:52:08.432438] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.966 [2024-12-11 14:52:08.432443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
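At this point the trace has split the two E810 ports into a small point-to-point topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), TCP port 4420 is opened, and reachability is verified with one ping in each direction before nvmf_tgt is started inside the namespace. Condensed from the trace above (ipts is a thin wrapper that tags the iptables rule with an SPDK comment; the nvmf_tgt path is shortened here):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

  # -m 0xF gives the target four cores (four reactors in the log), -e 0xFFFF enables the
  # tracepoint groups noted in the startup messages, -i 0 sets the shared-memory instance id.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &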
00:11:15.966 [2024-12-11 14:52:08.433843] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.966 [2024-12-11 14:52:08.433956] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.966 [2024-12-11 14:52:08.434061] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.966 [2024-12-11 14:52:08.434062] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.225 [2024-12-11 14:52:09.182445] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.225 14:52:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:16.225 [2024-12-11 14:52:09.254085] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:16.225 14:52:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:19.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:32.661 rmmod nvme_tcp 00:11:32.661 rmmod nvme_fabrics 00:11:32.661 rmmod nvme_keyring 00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3033526 ']' 00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3033526 00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3033526 ']' 00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3033526 00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
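The target is then provisioned over JSON-RPC and the connect/disconnect loop runs with num_iterations=5, producing the five "disconnected 1 controller(s)" lines above before teardown begins. The rpc_cmd calls in the trace ultimately issue these requests through SPDK's scripts/rpc.py, so the setup is equivalent to the following sketch; the nvme-cli loop at the end is illustrative only, since the loop body itself is not part of this excerpt:

  rpc=./scripts/rpc.py   # talks to /var/tmp/spdk.sock, the socket waitforlisten polled
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  $rpc bdev_malloc_create 64 512                      # 64 MB malloc bdev, 512-byte blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Illustrative connect/disconnect pass from the initiator side (default namespace):
  for i in $(seq 5); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  done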
00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:32.661 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3033526 00:11:32.921 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:32.921 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:32.921 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3033526' 00:11:32.921 killing process with pid 3033526 00:11:32.921 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3033526 00:11:32.921 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3033526 00:11:32.921 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:32.921 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:32.921 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:32.921 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:11:32.921 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:11:32.921 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:32.921 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:11:32.921 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:32.921 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:32.921 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.921 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.921 14:52:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.459 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:35.459 00:11:35.459 real 0m26.015s 00:11:35.459 user 1m11.286s 00:11:35.459 sys 0m5.835s 00:11:35.459 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.459 14:52:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:35.459 ************************************ 00:11:35.459 END TEST nvmf_connect_disconnect 00:11:35.459 ************************************ 00:11:35.459 14:52:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:35.459 14:52:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:35.459 14:52:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.459 14:52:28 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:35.459 ************************************ 00:11:35.459 START TEST nvmf_multitarget 00:11:35.459 ************************************ 00:11:35.459 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:35.459 * Looking for test storage... 00:11:35.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:11:35.459 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:35.459 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:11:35.459 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:35.459 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:35.459 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:35.459 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:35.459 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:35.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.460 --rc genhtml_branch_coverage=1 00:11:35.460 --rc genhtml_function_coverage=1 00:11:35.460 --rc genhtml_legend=1 00:11:35.460 --rc geninfo_all_blocks=1 00:11:35.460 --rc geninfo_unexecuted_blocks=1 00:11:35.460 00:11:35.460 ' 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:35.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.460 --rc genhtml_branch_coverage=1 00:11:35.460 --rc genhtml_function_coverage=1 00:11:35.460 --rc genhtml_legend=1 00:11:35.460 --rc geninfo_all_blocks=1 00:11:35.460 --rc geninfo_unexecuted_blocks=1 00:11:35.460 00:11:35.460 ' 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:35.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.460 --rc genhtml_branch_coverage=1 00:11:35.460 --rc genhtml_function_coverage=1 00:11:35.460 --rc genhtml_legend=1 00:11:35.460 --rc geninfo_all_blocks=1 00:11:35.460 --rc geninfo_unexecuted_blocks=1 00:11:35.460 00:11:35.460 ' 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:35.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:35.460 --rc genhtml_branch_coverage=1 00:11:35.460 --rc genhtml_function_coverage=1 00:11:35.460 --rc genhtml_legend=1 00:11:35.460 --rc geninfo_all_blocks=1 00:11:35.460 --rc geninfo_unexecuted_blocks=1 00:11:35.460 00:11:35.460 ' 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:11:35.460 14:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:35.460 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py 00:11:35.460 14:52:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:35.460 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:35.461 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:11:35.461 14:52:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:42.148 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:42.148 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.148 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:42.149 Found net devices under 0000:86:00.0: cvl_0_0 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:42.149 Found net devices under 0000:86:00.1: cvl_0_1 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.149 14:52:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:42.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:11:42.149 00:11:42.149 --- 10.0.0.2 ping statistics --- 00:11:42.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.149 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:42.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:11:42.149 00:11:42.149 --- 10.0.0.1 ping statistics --- 00:11:42.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.149 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3040091 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3040091 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3040091 ']' 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:42.149 14:52:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:42.149 [2024-12-11 14:52:34.365691] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:11:42.149 [2024-12-11 14:52:34.365734] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.149 [2024-12-11 14:52:34.445513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.149 [2024-12-11 14:52:34.485119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.149 [2024-12-11 14:52:34.485160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.149 [2024-12-11 14:52:34.485168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.149 [2024-12-11 14:52:34.485174] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.149 [2024-12-11 14:52:34.485179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.149 [2024-12-11 14:52:34.486609] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.149 [2024-12-11 14:52:34.486716] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.149 [2024-12-11 14:52:34.486826] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.149 [2024-12-11 14:52:34.486835] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.408 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.408 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:42.408 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.408 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.408 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:42.408 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.408 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:42.408 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:42.408 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:42.408 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:42.408 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:42.408 "nvmf_tgt_1" 00:11:42.408 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:42.666 "nvmf_tgt_2" 00:11:42.666 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
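multitarget.sh exercises SPDK's multi-target RPC surface: starting from the single default target, it creates nvmf_tgt_1 and nvmf_tgt_2, confirms the count with jq, then deletes both and checks that only the default target remains (the '[' 1 '!=' 1 ']' and '[' 3 '!=' 3 ']' comparisons in the surrounding trace are those count checks). Stripped of the tracing, the sequence is roughly:

  mt=./test/nvmf/target/multitarget_rpc.py

  [ "$($mt nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
  $mt nvmf_create_target -n nvmf_tgt_1 -s 32
  $mt nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($mt nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
  $mt nvmf_delete_target -n nvmf_tgt_1
  $mt nvmf_delete_target -n nvmf_tgt_2
  [ "$($mt nvmf_get_targets | jq length)" -eq 1 ]   # back to the default target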
00:11:42.666 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:42.667 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:42.667 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:42.925 true 00:11:42.925 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:42.925 true 00:11:42.925 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:42.925 14:52:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:43.184 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:43.184 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:43.184 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:43.184 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:43.184 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:43.184 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:43.184 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:43.184 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.184 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:43.184 rmmod nvme_tcp 00:11:43.184 rmmod nvme_fabrics 00:11:43.184 rmmod nvme_keyring 00:11:43.184 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.184 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:43.184 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:43.184 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3040091 ']' 00:11:43.184 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3040091 00:11:43.184 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3040091 ']' 00:11:43.185 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3040091 00:11:43.185 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:43.185 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.185 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3040091 00:11:43.185 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.185 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.185 14:52:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3040091' 00:11:43.185 killing process with pid 3040091 00:11:43.185 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3040091 00:11:43.185 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3040091 00:11:43.444 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:43.444 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:43.444 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:43.444 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:11:43.444 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:11:43.444 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:43.444 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:11:43.444 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:43.444 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:43.444 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.444 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.444 14:52:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.351 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:45.351 00:11:45.351 real 0m10.331s 00:11:45.351 user 0m9.981s 00:11:45.352 sys 0m4.925s 00:11:45.352 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.352 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:45.352 ************************************ 00:11:45.352 END TEST nvmf_multitarget 00:11:45.352 ************************************ 00:11:45.611 14:52:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:45.611 14:52:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:45.611 14:52:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.611 14:52:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:45.611 ************************************ 00:11:45.611 START TEST nvmf_rpc 00:11:45.611 ************************************ 00:11:45.611 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:45.611 * Looking for test storage... 
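
Before nvmf_rpc sets everything up again, the multitarget run above ended with nvmftestfini tearing the target back down. Condensed, that cleanup amounts to the following; the pid is the one from this run, and the explicit `ip netns delete` is an assumption, since the trace only shows the _remove_spdk_ns wrapper, not its internals:

    PID=3040091
    sync
    modprobe -r nvme-tcp nvme-fabrics                      # unload initiator-side modules
    kill "$PID" && wait "$PID"                             # works because nvmf_tgt was started from this shell
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged firewall rules
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null            # assumption: what _remove_spdk_ns boils down to
    ip -4 addr flush cvl_0_1                               # clear the initiator-side address
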
00:11:45.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:45.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.612 --rc genhtml_branch_coverage=1 00:11:45.612 --rc genhtml_function_coverage=1 00:11:45.612 --rc genhtml_legend=1 00:11:45.612 --rc geninfo_all_blocks=1 00:11:45.612 --rc geninfo_unexecuted_blocks=1 00:11:45.612 00:11:45.612 ' 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:45.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.612 --rc genhtml_branch_coverage=1 00:11:45.612 --rc genhtml_function_coverage=1 00:11:45.612 --rc genhtml_legend=1 00:11:45.612 --rc geninfo_all_blocks=1 00:11:45.612 --rc geninfo_unexecuted_blocks=1 00:11:45.612 00:11:45.612 ' 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:45.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.612 --rc genhtml_branch_coverage=1 00:11:45.612 --rc genhtml_function_coverage=1 00:11:45.612 --rc genhtml_legend=1 00:11:45.612 --rc geninfo_all_blocks=1 00:11:45.612 --rc geninfo_unexecuted_blocks=1 00:11:45.612 00:11:45.612 ' 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:45.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.612 --rc genhtml_branch_coverage=1 00:11:45.612 --rc genhtml_function_coverage=1 00:11:45.612 --rc genhtml_legend=1 00:11:45.612 --rc geninfo_all_blocks=1 00:11:45.612 --rc geninfo_unexecuted_blocks=1 00:11:45.612 00:11:45.612 ' 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
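
The lcov gate traced above is just a dotted-version comparison: scripts/common.sh splits each version on '.', '-' and ':' and compares the fields numerically. A small stand-alone equivalent, offered as an illustration of the idea rather than a copy of cmp_versions, and assuming purely numeric components:

    # Succeed if dotted version $1 is strictly less than $2 (sketch).
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( 10#$x < 10#$y )) && return 0
            (( 10#$x > 10#$y )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # matches the check traced above
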
00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.612 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.872 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:45.872 14:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:11:45.872 14:52:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:52.447 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:52.447 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.447 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:52.448 Found net devices under 0000:86:00.0: cvl_0_0 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:52.448 Found net devices under 0000:86:00.1: cvl_0_1 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:52.448 14:52:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:52.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:11:52.448 00:11:52.448 --- 10.0.0.2 ping statistics --- 00:11:52.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.448 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:52.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:11:52.448 00:11:52.448 --- 10.0.0.1 ping statistics --- 00:11:52.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.448 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3043940 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3043940 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3043940 ']' 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.448 [2024-12-11 14:52:44.705456] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
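
Putting the nvmf_tcp_init plumbing above together: the target-side port cvl_0_0 is moved into a private namespace and addressed as 10.0.0.2, its peer cvl_0_1 stays in the root namespace as 10.0.0.1, and nvmf_tgt is then started inside that namespace. A condensed sketch of the same commands (interface names, addresses, firewall comment and the nvmf_tgt path are taken from the trace; run as root; the target is backgrounded here only for illustration):

    NS=cvl_0_0_ns_spdk
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                    # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # Allow the NVMe/TCP listener port through, tagged so cleanup can strip it later.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Sanity-check both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

    # Launch nvmf_tgt on four cores inside the namespace.
    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
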
00:11:52.448 [2024-12-11 14:52:44.705497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.448 [2024-12-11 14:52:44.787550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.448 [2024-12-11 14:52:44.829094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.448 [2024-12-11 14:52:44.829129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.448 [2024-12-11 14:52:44.829140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.448 [2024-12-11 14:52:44.829146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.448 [2024-12-11 14:52:44.829151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.448 [2024-12-11 14:52:44.830585] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.448 [2024-12-11 14:52:44.830695] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.448 [2024-12-11 14:52:44.830798] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.448 [2024-12-11 14:52:44.830800] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.448 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:52.448 "tick_rate": 2300000000, 00:11:52.448 "poll_groups": [ 00:11:52.448 { 00:11:52.448 "name": "nvmf_tgt_poll_group_000", 00:11:52.448 "admin_qpairs": 0, 00:11:52.448 "io_qpairs": 0, 00:11:52.448 "current_admin_qpairs": 0, 00:11:52.448 "current_io_qpairs": 0, 00:11:52.448 "pending_bdev_io": 0, 00:11:52.448 "completed_nvme_io": 0, 00:11:52.448 "transports": [] 00:11:52.448 }, 00:11:52.448 { 00:11:52.448 "name": "nvmf_tgt_poll_group_001", 00:11:52.448 "admin_qpairs": 0, 00:11:52.448 "io_qpairs": 0, 00:11:52.448 "current_admin_qpairs": 0, 00:11:52.448 "current_io_qpairs": 0, 00:11:52.448 "pending_bdev_io": 0, 00:11:52.448 "completed_nvme_io": 0, 00:11:52.448 "transports": [] 00:11:52.448 }, 00:11:52.448 { 00:11:52.448 "name": "nvmf_tgt_poll_group_002", 00:11:52.448 "admin_qpairs": 0, 00:11:52.448 "io_qpairs": 0, 00:11:52.448 
"current_admin_qpairs": 0, 00:11:52.448 "current_io_qpairs": 0, 00:11:52.448 "pending_bdev_io": 0, 00:11:52.448 "completed_nvme_io": 0, 00:11:52.448 "transports": [] 00:11:52.448 }, 00:11:52.448 { 00:11:52.448 "name": "nvmf_tgt_poll_group_003", 00:11:52.448 "admin_qpairs": 0, 00:11:52.448 "io_qpairs": 0, 00:11:52.448 "current_admin_qpairs": 0, 00:11:52.448 "current_io_qpairs": 0, 00:11:52.448 "pending_bdev_io": 0, 00:11:52.448 "completed_nvme_io": 0, 00:11:52.448 "transports": [] 00:11:52.448 } 00:11:52.448 ] 00:11:52.448 }' 00:11:52.449 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:52.449 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:52.449 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:52.449 14:52:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.449 [2024-12-11 14:52:45.077504] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:52.449 "tick_rate": 2300000000, 00:11:52.449 "poll_groups": [ 00:11:52.449 { 00:11:52.449 "name": "nvmf_tgt_poll_group_000", 00:11:52.449 "admin_qpairs": 0, 00:11:52.449 "io_qpairs": 0, 00:11:52.449 "current_admin_qpairs": 0, 00:11:52.449 "current_io_qpairs": 0, 00:11:52.449 "pending_bdev_io": 0, 00:11:52.449 "completed_nvme_io": 0, 00:11:52.449 "transports": [ 00:11:52.449 { 00:11:52.449 "trtype": "TCP" 00:11:52.449 } 00:11:52.449 ] 00:11:52.449 }, 00:11:52.449 { 00:11:52.449 "name": "nvmf_tgt_poll_group_001", 00:11:52.449 "admin_qpairs": 0, 00:11:52.449 "io_qpairs": 0, 00:11:52.449 "current_admin_qpairs": 0, 00:11:52.449 "current_io_qpairs": 0, 00:11:52.449 "pending_bdev_io": 0, 00:11:52.449 "completed_nvme_io": 0, 00:11:52.449 "transports": [ 00:11:52.449 { 00:11:52.449 "trtype": "TCP" 00:11:52.449 } 00:11:52.449 ] 00:11:52.449 }, 00:11:52.449 { 00:11:52.449 "name": "nvmf_tgt_poll_group_002", 00:11:52.449 "admin_qpairs": 0, 00:11:52.449 "io_qpairs": 0, 00:11:52.449 "current_admin_qpairs": 0, 00:11:52.449 "current_io_qpairs": 0, 00:11:52.449 "pending_bdev_io": 0, 00:11:52.449 "completed_nvme_io": 0, 00:11:52.449 "transports": [ 00:11:52.449 { 00:11:52.449 "trtype": "TCP" 
00:11:52.449 } 00:11:52.449 ] 00:11:52.449 }, 00:11:52.449 { 00:11:52.449 "name": "nvmf_tgt_poll_group_003", 00:11:52.449 "admin_qpairs": 0, 00:11:52.449 "io_qpairs": 0, 00:11:52.449 "current_admin_qpairs": 0, 00:11:52.449 "current_io_qpairs": 0, 00:11:52.449 "pending_bdev_io": 0, 00:11:52.449 "completed_nvme_io": 0, 00:11:52.449 "transports": [ 00:11:52.449 { 00:11:52.449 "trtype": "TCP" 00:11:52.449 } 00:11:52.449 ] 00:11:52.449 } 00:11:52.449 ] 00:11:52.449 }' 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.449 Malloc1 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.449 [2024-12-11 14:52:45.256964] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:52.449 [2024-12-11 14:52:45.285617] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:52.449 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:52.449 could not add new controller: failed to write to nvme-fabrics device 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:52.449 14:52:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.449 14:52:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:53.828 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:53.828 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:53.828 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.828 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:53.828 14:52:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:55.734 [2024-12-11 14:52:48.603827] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:55.734 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:55.734 could not add new controller: failed to write to nvme-fabrics device 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:55.734 
14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.734 14:52:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:57.112 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:57.112 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:57.112 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:57.112 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:57.112 14:52:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:59.016 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:59.016 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:59.017 
14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.017 [2024-12-11 14:52:51.952812] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.017 14:52:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:00.395 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:00.395 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:00.395 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.395 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:00.395 14:52:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.300 [2024-12-11 14:52:55.257817] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.300 14:52:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:03.678 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:03.678 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:03.678 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:03.678 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:03.678 14:52:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:05.583 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:05.583 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:05.583 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:05.583 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:05.583 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:05.583 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:05.583 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:05.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.842 [2024-12-11 14:52:58.757567] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.842 14:52:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.220 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:07.220 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:07.220 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.220 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:07.220 14:52:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:09.126 
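The waitforserial helper invoked after each connect (and its waitforserial_disconnect counterpart after each disconnect) does nothing more than poll lsblk until a block device whose SERIAL column matches the subsystem's serial appears (or disappears). A rough stand-alone equivalent of the appear-side poll, keeping this run's SPDKISFASTANDAWESOME serial and the helper's roughly 15-attempt, 2-second cadence (structure inferred from the xtrace above, not copied from the helper's source):

    #!/usr/bin/env bash
    # Hedged re-implementation of the waitforserial poll seen in the trace above.
    serial=SPDKISFASTANDAWESOME
    expected=1          # number of namespaces expected to appear
    i=0
    sleep 2             # give the fabrics connect a moment to settle
    while (( i++ <= 15 )); do
        # Count block devices whose SERIAL matches the target subsystem's serial.
        found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( found == expected )) && exit 0
        sleep 2
    done
    echo "device with serial $serial never appeared" >&2
    exit 1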
14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:09.126 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:09.126 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.126 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:09.126 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.126 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:09.126 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.126 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.126 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:09.126 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:09.126 14:53:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.126 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:09.126 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.126 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:09.126 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:09.126 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.126 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.126 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.126 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.126 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.126 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.126 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.126 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:09.126 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:09.126 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.126 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.126 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.127 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.127 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:09.127 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.127 [2024-12-11 14:53:02.057908] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.127 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.127 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:09.127 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.127 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.127 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.127 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:09.127 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.127 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.127 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.127 14:53:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.505 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:10.505 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:10.505 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:10.505 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:10.505 14:53:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:12.410 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:12.410 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:12.410 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:12.410 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:12.410 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:12.410 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:12.410 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:12.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.410 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:12.410 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.411 [2024-12-11 14:53:05.413500] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.411 14:53:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:13.788 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:13.788 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:13.788 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:13.788 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:13.788 14:53:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:15.693 
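Each of the five iterations that just finished (target/rpc.sh lines 81-94 in the trace) runs the same create → export → connect → verify → disconnect → tear-down sequence. Condensed into one hedged sketch: the rpc.py path is assumed, the Malloc1 bdev is taken to exist already, and the address and serial are simply carried over from this run.

    #!/usr/bin/env bash
    # Hedged condensation of the connect/disconnect loop traced above.
    NQN=nqn.2016-06.io.spdk:cnode1
    RPC=./scripts/rpc.py                 # assumed path to SPDK's rpc.py

    for i in $(seq 1 5); do
        "$RPC" nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
        "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
        "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5   # expose bdev Malloc1 as nsid 5
        "$RPC" nvmf_subsystem_allow_any_host "$NQN"

        nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
        # ... wait for the namespace (see the lsblk poll above), run I/O, etc. ...
        nvme disconnect -n "$NQN"

        "$RPC" nvmf_subsystem_remove_ns "$NQN" 5
        "$RPC" nvmf_delete_subsystem "$NQN"
    done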
14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.693 [2024-12-11 14:53:08.720038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.693 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.953 [2024-12-11 14:53:08.768124] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.953 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 
14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 [2024-12-11 14:53:08.816267] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 [2024-12-11 14:53:08.864435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 [2024-12-11 14:53:08.912607] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:15.954 "tick_rate": 2300000000, 00:12:15.954 "poll_groups": [ 00:12:15.954 { 00:12:15.954 "name": "nvmf_tgt_poll_group_000", 00:12:15.954 "admin_qpairs": 2, 00:12:15.954 "io_qpairs": 168, 00:12:15.954 "current_admin_qpairs": 0, 00:12:15.954 "current_io_qpairs": 0, 00:12:15.954 "pending_bdev_io": 0, 00:12:15.954 "completed_nvme_io": 175, 00:12:15.954 "transports": [ 00:12:15.954 { 00:12:15.954 "trtype": "TCP" 00:12:15.954 } 00:12:15.954 ] 00:12:15.954 }, 00:12:15.954 { 00:12:15.954 "name": "nvmf_tgt_poll_group_001", 00:12:15.954 "admin_qpairs": 2, 00:12:15.954 "io_qpairs": 168, 00:12:15.954 "current_admin_qpairs": 0, 00:12:15.954 "current_io_qpairs": 0, 00:12:15.954 "pending_bdev_io": 0, 00:12:15.954 "completed_nvme_io": 218, 00:12:15.954 "transports": [ 00:12:15.954 { 00:12:15.954 "trtype": "TCP" 00:12:15.954 } 00:12:15.954 ] 00:12:15.954 }, 00:12:15.954 { 00:12:15.954 "name": "nvmf_tgt_poll_group_002", 00:12:15.954 "admin_qpairs": 1, 00:12:15.954 "io_qpairs": 168, 00:12:15.954 "current_admin_qpairs": 0, 00:12:15.954 "current_io_qpairs": 0, 00:12:15.954 "pending_bdev_io": 0, 00:12:15.954 "completed_nvme_io": 317, 00:12:15.954 "transports": [ 00:12:15.954 { 00:12:15.954 "trtype": "TCP" 00:12:15.954 } 00:12:15.954 ] 00:12:15.954 }, 00:12:15.954 { 00:12:15.954 "name": "nvmf_tgt_poll_group_003", 00:12:15.954 "admin_qpairs": 2, 00:12:15.954 "io_qpairs": 168, 00:12:15.954 "current_admin_qpairs": 0, 00:12:15.954 "current_io_qpairs": 0, 00:12:15.954 "pending_bdev_io": 0, 00:12:15.954 "completed_nvme_io": 312, 00:12:15.954 "transports": [ 00:12:15.954 { 00:12:15.954 "trtype": "TCP" 00:12:15.954 } 00:12:15.954 ] 00:12:15.954 } 00:12:15.954 ] 00:12:15.954 }' 00:12:15.954 14:53:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:15.954 14:53:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:16.214 rmmod nvme_tcp 00:12:16.214 rmmod nvme_fabrics 00:12:16.214 rmmod nvme_keyring 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3043940 ']' 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3043940 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3043940 ']' 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3043940 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3043940 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3043940' 00:12:16.214 killing process with pid 3043940 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3043940 00:12:16.214 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3043940 00:12:16.475 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:16.475 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:16.475 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:16.475 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:12:16.475 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:12:16.475 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:16.475 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:12:16.475 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:16.475 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:16.475 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.475 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.475 14:53:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:19.016 00:12:19.016 real 0m32.978s 00:12:19.016 user 1m39.404s 00:12:19.016 sys 0m6.595s 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.016 ************************************ 00:12:19.016 END TEST nvmf_rpc 00:12:19.016 ************************************ 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:19.016 ************************************ 00:12:19.016 START TEST nvmf_invalid 00:12:19.016 ************************************ 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:19.016 * Looking for test storage... 
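For reference, the qpair totals asserted just before the teardown above, (( 7 > 0 )) summed admin_qpairs and (( 672 > 0 )) summed io_qpairs, come from the jsum helper, which pipes a jq projection of the nvmf_get_stats output through an awk accumulator. A stand-alone hedged equivalent (rpc.py path assumed, and the stats are read live here rather than from a saved variable as the helper does):

    #!/usr/bin/env bash
    # Hedged stand-alone version of the jsum aggregation used in the rpc test above.
    RPC=./scripts/rpc.py                 # assumed path to SPDK's rpc.py

    jsum() {
        # Sum one numeric field across all poll groups reported by nvmf_get_stats.
        "$RPC" nvmf_get_stats | jq "$1" | awk '{s+=$1} END {print s}'
    }

    admin=$(jsum '.poll_groups[].admin_qpairs')
    io=$(jsum '.poll_groups[].io_qpairs')
    (( admin > 0 && io > 0 )) && echo "ok: $admin admin qpairs, $io I/O qpairs served"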
00:12:19.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:19.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.016 --rc genhtml_branch_coverage=1 00:12:19.016 --rc genhtml_function_coverage=1 00:12:19.016 --rc genhtml_legend=1 00:12:19.016 --rc geninfo_all_blocks=1 00:12:19.016 --rc geninfo_unexecuted_blocks=1 00:12:19.016 00:12:19.016 ' 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:19.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.016 --rc genhtml_branch_coverage=1 00:12:19.016 --rc genhtml_function_coverage=1 00:12:19.016 --rc genhtml_legend=1 00:12:19.016 --rc geninfo_all_blocks=1 00:12:19.016 --rc geninfo_unexecuted_blocks=1 00:12:19.016 00:12:19.016 ' 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.016 --rc genhtml_branch_coverage=1 00:12:19.016 --rc genhtml_function_coverage=1 00:12:19.016 --rc genhtml_legend=1 00:12:19.016 --rc geninfo_all_blocks=1 00:12:19.016 --rc geninfo_unexecuted_blocks=1 00:12:19.016 00:12:19.016 ' 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.016 --rc genhtml_branch_coverage=1 00:12:19.016 --rc genhtml_function_coverage=1 00:12:19.016 --rc genhtml_legend=1 00:12:19.016 --rc geninfo_all_blocks=1 00:12:19.016 --rc geninfo_unexecuted_blocks=1 00:12:19.016 00:12:19.016 ' 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:19.016 14:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:12:19.016 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:19.017 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:12:19.017 14:53:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:25.590 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:25.591 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:25.591 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:25.591 Found net devices under 0000:86:00.0: cvl_0_0 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:25.591 Found net devices under 0000:86:00.1: cvl_0_1 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:25.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:12:25.591 00:12:25.591 --- 10.0.0.2 ping statistics --- 00:12:25.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.591 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:25.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:12:25.591 00:12:25.591 --- 10.0.0.1 ping statistics --- 00:12:25.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.591 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3051562 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3051562 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3051562 ']' 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:25.591 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:25.591 [2024-12-11 14:53:17.722413] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
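Note: the nvmftestinit / nvmf_tcp_init sequence traced above builds the two-port NVMe/TCP test topology: one E810 port (cvl_0_0) is moved into a private network namespace to act as the target side, the other port (cvl_0_1) stays in the root namespace as the initiator side, both get 10.0.0.x/24 addresses, the NVMe/TCP listener port is opened in the firewall, and reachability is verified with a ping in each direction before nvmf_tgt is started inside the namespace. A condensed restatement of those steps, using the interface names and addresses from this run:

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target-side port now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow the NVMe/TCP listener port
ping -c 1 10.0.0.2                           # root namespace -> namespaced target address
ip netns exec "$NS" ping -c 1 10.0.0.1       # namespace -> initiator address

The target itself is then launched as ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, so every listener it creates is reachable only over this topology.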
00:12:25.591 [2024-12-11 14:53:17.722470] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.591 [2024-12-11 14:53:17.803926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.591 [2024-12-11 14:53:17.844594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.591 [2024-12-11 14:53:17.844634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.591 [2024-12-11 14:53:17.844640] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.591 [2024-12-11 14:53:17.844646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.591 [2024-12-11 14:53:17.844651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.591 [2024-12-11 14:53:17.846242] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.591 [2024-12-11 14:53:17.846358] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.591 [2024-12-11 14:53:17.846467] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.591 [2024-12-11 14:53:17.846468] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.592 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.592 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:25.592 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:25.592 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:25.592 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:25.592 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.592 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:25.592 14:53:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6461 00:12:25.592 [2024-12-11 14:53:18.153399] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:25.592 { 00:12:25.592 "nqn": "nqn.2016-06.io.spdk:cnode6461", 00:12:25.592 "tgt_name": "foobar", 00:12:25.592 "method": "nvmf_create_subsystem", 00:12:25.592 "req_id": 1 00:12:25.592 } 00:12:25.592 Got JSON-RPC error response 00:12:25.592 response: 00:12:25.592 { 00:12:25.592 "code": -32603, 00:12:25.592 "message": "Unable to find target foobar" 00:12:25.592 }' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:25.592 { 00:12:25.592 "nqn": "nqn.2016-06.io.spdk:cnode6461", 00:12:25.592 "tgt_name": "foobar", 00:12:25.592 "method": "nvmf_create_subsystem", 00:12:25.592 "req_id": 1 00:12:25.592 } 00:12:25.592 Got JSON-RPC error response 00:12:25.592 
response: 00:12:25.592 { 00:12:25.592 "code": -32603, 00:12:25.592 "message": "Unable to find target foobar" 00:12:25.592 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16945 00:12:25.592 [2024-12-11 14:53:18.350056] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16945: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:25.592 { 00:12:25.592 "nqn": "nqn.2016-06.io.spdk:cnode16945", 00:12:25.592 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:25.592 "method": "nvmf_create_subsystem", 00:12:25.592 "req_id": 1 00:12:25.592 } 00:12:25.592 Got JSON-RPC error response 00:12:25.592 response: 00:12:25.592 { 00:12:25.592 "code": -32602, 00:12:25.592 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:25.592 }' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:25.592 { 00:12:25.592 "nqn": "nqn.2016-06.io.spdk:cnode16945", 00:12:25.592 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:25.592 "method": "nvmf_create_subsystem", 00:12:25.592 "req_id": 1 00:12:25.592 } 00:12:25.592 Got JSON-RPC error response 00:12:25.592 response: 00:12:25.592 { 00:12:25.592 "code": -32602, 00:12:25.592 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:25.592 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27342 00:12:25.592 [2024-12-11 14:53:18.546689] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27342: invalid model number 'SPDK_Controller' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:25.592 { 00:12:25.592 "nqn": "nqn.2016-06.io.spdk:cnode27342", 00:12:25.592 "model_number": "SPDK_Controller\u001f", 00:12:25.592 "method": "nvmf_create_subsystem", 00:12:25.592 "req_id": 1 00:12:25.592 } 00:12:25.592 Got JSON-RPC error response 00:12:25.592 response: 00:12:25.592 { 00:12:25.592 "code": -32602, 00:12:25.592 "message": "Invalid MN SPDK_Controller\u001f" 00:12:25.592 }' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:25.592 { 00:12:25.592 "nqn": "nqn.2016-06.io.spdk:cnode27342", 00:12:25.592 "model_number": "SPDK_Controller\u001f", 00:12:25.592 "method": "nvmf_create_subsystem", 00:12:25.592 "req_id": 1 00:12:25.592 } 00:12:25.592 Got JSON-RPC error response 00:12:25.592 response: 00:12:25.592 { 00:12:25.592 "code": -32602, 00:12:25.592 "message": "Invalid MN SPDK_Controller\u001f" 00:12:25.592 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:25.592 14:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.592 14:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.592 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:12:25.852 
14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 
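Note: just before the character-by-character string generation traced here, target/invalid.sh exercised three negative nvmf_create_subsystem cases: an unknown --tgt-name (foobar), a serial number ending in the non-printable byte 0x1f, and a model number ending in the same byte. Each RPC is expected to fail, and the script globs the captured JSON-RPC error text for the expected message ("Unable to find target", "Invalid SN", "Invalid MN"). A minimal sketch of two such checks, assuming a target already listening on the default /var/tmp/spdk.sock and shortening the workspace-qualified rpc.py path from the trace:

rpc=./scripts/rpc.py
# Unknown target name must be rejected with "Unable to find target foobar".
out=$($rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6461 2>&1) || true
[[ $out == *"Unable to find target"* ]] || { echo "unexpected response: $out"; exit 1; }
# A serial number carrying a control character (0x1f, injected via $'...') must be rejected.
out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16945 2>&1) || true
[[ $out == *"Invalid SN"* ]] || { echo "unexpected response: $out"; exit 1; }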
00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ H == \- ]] 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'HVMl]p9F{~qUdcx8ck)mQ' 00:12:25.852 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -s 'HVMl]p9F{~qUdcx8ck)mQ' nqn.2016-06.io.spdk:cnode16156 00:12:25.852 [2024-12-11 14:53:18.883843] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16156: invalid serial number 'HVMl]p9F{~qUdcx8ck)mQ' 00:12:26.112 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:26.112 { 00:12:26.112 "nqn": "nqn.2016-06.io.spdk:cnode16156", 00:12:26.112 "serial_number": "HVMl]p9F{~qUdcx8ck)mQ", 00:12:26.112 "method": "nvmf_create_subsystem", 00:12:26.112 "req_id": 1 00:12:26.112 } 00:12:26.112 Got JSON-RPC error response 00:12:26.112 response: 00:12:26.112 { 00:12:26.112 "code": -32602, 00:12:26.112 "message": "Invalid SN HVMl]p9F{~qUdcx8ck)mQ" 00:12:26.112 }' 00:12:26.112 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:26.112 { 00:12:26.112 "nqn": "nqn.2016-06.io.spdk:cnode16156", 00:12:26.112 "serial_number": "HVMl]p9F{~qUdcx8ck)mQ", 00:12:26.112 "method": "nvmf_create_subsystem", 00:12:26.112 "req_id": 1 00:12:26.112 } 00:12:26.112 Got JSON-RPC error response 00:12:26.112 response: 00:12:26.112 { 00:12:26.112 "code": -32602, 00:12:26.112 "message": "Invalid SN HVMl]p9F{~qUdcx8ck)mQ" 00:12:26.112 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:26.112 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:26.112 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:26.112 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' 
'74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:26.112 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:26.112 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:26.112 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:26.112 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.112 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:12:26.112 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:26.112 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:26.112 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.112 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.112 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:26.113 
14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:26.113 14:53:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x73' 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:12:26.113 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 
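The character-by-character xtrace above and below is invalid.sh assembling a random model-number string, one printf %x / echo -e round-trip per character. A minimal bash sketch of that pattern (the helper name and character range here are assumptions, not the exact invalid.sh source):

    # Sketch: build an n-character random string the same way the trace does.
    gen_random_string() {                            # hypothetical helper name
      local length=$1 ll string='' hex
      for ((ll = 0; ll < length; ll++)); do
        printf -v hex '%x' $((RANDOM % 94 + 33))     # printable non-space ASCII (assumed range)
        string+=$(echo -e "\x$hex")                  # same %x -> \xNN trick as the trace
      done
      echo "$string"
    }

The resulting string is then handed to nvmf_create_subsystem as -d (model number), which the target is expected to reject, as the "Invalid MN" error further down shows.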
00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 
61 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:26.114 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:26.373 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.373 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.373 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:26.373 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:26.373 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:26.373 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.373 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.373 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:26.373 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:26.373 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:26.373 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:26.373 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:26.373 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ = == \- ]] 00:12:26.373 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '=3Js)n%:?7MWm2C)>_q}OH=P._Q' 00:12:26.373 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -d '=3Js)n%:?7MWm2C)>_q}OH=P._Q' nqn.2016-06.io.spdk:cnode8491 00:12:26.373 [2024-12-11 14:53:19.353409] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8491: invalid model number '=3Js)n%:?7MWm2C)>_q}OH=P._Q' 00:12:26.373 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:26.373 { 00:12:26.373 "nqn": "nqn.2016-06.io.spdk:cnode8491", 00:12:26.373 "model_number": "=3Js)n%:?7MWm2C)>_q}OH=P._Q", 00:12:26.373 "method": "nvmf_create_subsystem", 00:12:26.373 "req_id": 1 00:12:26.373 } 00:12:26.373 Got JSON-RPC error response 00:12:26.373 response: 
00:12:26.373 { 00:12:26.373 "code": -32602, 00:12:26.373 "message": "Invalid MN =3Js)n%:?7MWm2C)>_q}OH=P._Q" 00:12:26.373 }' 00:12:26.373 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:26.373 { 00:12:26.373 "nqn": "nqn.2016-06.io.spdk:cnode8491", 00:12:26.373 "model_number": "=3Js)n%:?7MWm2C)>_q}OH=P._Q", 00:12:26.373 "method": "nvmf_create_subsystem", 00:12:26.373 "req_id": 1 00:12:26.373 } 00:12:26.373 Got JSON-RPC error response 00:12:26.373 response: 00:12:26.373 { 00:12:26.373 "code": -32602, 00:12:26.374 "message": "Invalid MN =3Js)n%:?7MWm2C)>_q}OH=P._Q" 00:12:26.374 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:26.374 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:26.633 [2024-12-11 14:53:19.558178] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.633 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:26.891 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:26.891 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:26.891 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:26.891 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:26.891 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:27.150 [2024-12-11 14:53:19.963502] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:27.150 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:27.150 { 00:12:27.150 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:27.150 "listen_address": { 00:12:27.150 "trtype": "tcp", 00:12:27.150 "traddr": "", 00:12:27.150 "trsvcid": "4421" 00:12:27.150 }, 00:12:27.150 "method": "nvmf_subsystem_remove_listener", 00:12:27.150 "req_id": 1 00:12:27.150 } 00:12:27.150 Got JSON-RPC error response 00:12:27.150 response: 00:12:27.150 { 00:12:27.150 "code": -32602, 00:12:27.150 "message": "Invalid parameters" 00:12:27.150 }' 00:12:27.150 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:27.150 { 00:12:27.150 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:27.150 "listen_address": { 00:12:27.150 "trtype": "tcp", 00:12:27.150 "traddr": "", 00:12:27.150 "trsvcid": "4421" 00:12:27.150 }, 00:12:27.150 "method": "nvmf_subsystem_remove_listener", 00:12:27.150 "req_id": 1 00:12:27.150 } 00:12:27.150 Got JSON-RPC error response 00:12:27.150 response: 00:12:27.150 { 00:12:27.150 "code": -32602, 00:12:27.150 "message": "Invalid parameters" 00:12:27.150 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:27.150 14:53:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6414 -i 0 00:12:27.150 [2024-12-11 14:53:20.164144] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6414: invalid cntlid range [0-65519] 00:12:27.150 14:53:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:12:27.150 { 00:12:27.150 "nqn": "nqn.2016-06.io.spdk:cnode6414", 00:12:27.150 "min_cntlid": 0, 00:12:27.150 "method": "nvmf_create_subsystem", 00:12:27.150 "req_id": 1 00:12:27.150 } 00:12:27.150 Got JSON-RPC error response 00:12:27.150 response: 00:12:27.150 { 00:12:27.150 "code": -32602, 00:12:27.150 "message": "Invalid cntlid range [0-65519]" 00:12:27.150 }' 00:12:27.150 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:12:27.150 { 00:12:27.150 "nqn": "nqn.2016-06.io.spdk:cnode6414", 00:12:27.150 "min_cntlid": 0, 00:12:27.150 "method": "nvmf_create_subsystem", 00:12:27.150 "req_id": 1 00:12:27.150 } 00:12:27.150 Got JSON-RPC error response 00:12:27.150 response: 00:12:27.150 { 00:12:27.150 "code": -32602, 00:12:27.150 "message": "Invalid cntlid range [0-65519]" 00:12:27.150 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:27.150 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7139 -i 65520 00:12:27.409 [2024-12-11 14:53:20.380898] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7139: invalid cntlid range [65520-65519] 00:12:27.409 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:12:27.409 { 00:12:27.410 "nqn": "nqn.2016-06.io.spdk:cnode7139", 00:12:27.410 "min_cntlid": 65520, 00:12:27.410 "method": "nvmf_create_subsystem", 00:12:27.410 "req_id": 1 00:12:27.410 } 00:12:27.410 Got JSON-RPC error response 00:12:27.410 response: 00:12:27.410 { 00:12:27.410 "code": -32602, 00:12:27.410 "message": "Invalid cntlid range [65520-65519]" 00:12:27.410 }' 00:12:27.410 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:12:27.410 { 00:12:27.410 "nqn": "nqn.2016-06.io.spdk:cnode7139", 00:12:27.410 "min_cntlid": 65520, 00:12:27.410 "method": "nvmf_create_subsystem", 00:12:27.410 "req_id": 1 00:12:27.410 } 00:12:27.410 Got JSON-RPC error response 00:12:27.410 response: 00:12:27.410 { 00:12:27.410 "code": -32602, 00:12:27.410 "message": "Invalid cntlid range [65520-65519]" 00:12:27.410 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:27.410 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11434 -I 0 00:12:27.669 [2024-12-11 14:53:20.597617] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11434: invalid cntlid range [1-0] 00:12:27.669 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:12:27.669 { 00:12:27.669 "nqn": "nqn.2016-06.io.spdk:cnode11434", 00:12:27.669 "max_cntlid": 0, 00:12:27.669 "method": "nvmf_create_subsystem", 00:12:27.669 "req_id": 1 00:12:27.669 } 00:12:27.669 Got JSON-RPC error response 00:12:27.669 response: 00:12:27.669 { 00:12:27.669 "code": -32602, 00:12:27.669 "message": "Invalid cntlid range [1-0]" 00:12:27.669 }' 00:12:27.669 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:12:27.669 { 00:12:27.669 "nqn": "nqn.2016-06.io.spdk:cnode11434", 00:12:27.669 "max_cntlid": 0, 00:12:27.669 "method": "nvmf_create_subsystem", 00:12:27.669 "req_id": 1 00:12:27.669 } 00:12:27.669 Got JSON-RPC error response 00:12:27.669 
response: 00:12:27.669 { 00:12:27.669 "code": -32602, 00:12:27.669 "message": "Invalid cntlid range [1-0]" 00:12:27.669 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:27.669 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32049 -I 65520 00:12:27.928 [2024-12-11 14:53:20.810342] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32049: invalid cntlid range [1-65520] 00:12:27.928 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:12:27.928 { 00:12:27.928 "nqn": "nqn.2016-06.io.spdk:cnode32049", 00:12:27.928 "max_cntlid": 65520, 00:12:27.928 "method": "nvmf_create_subsystem", 00:12:27.928 "req_id": 1 00:12:27.928 } 00:12:27.928 Got JSON-RPC error response 00:12:27.928 response: 00:12:27.928 { 00:12:27.928 "code": -32602, 00:12:27.928 "message": "Invalid cntlid range [1-65520]" 00:12:27.928 }' 00:12:27.928 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:12:27.928 { 00:12:27.928 "nqn": "nqn.2016-06.io.spdk:cnode32049", 00:12:27.928 "max_cntlid": 65520, 00:12:27.928 "method": "nvmf_create_subsystem", 00:12:27.928 "req_id": 1 00:12:27.928 } 00:12:27.928 Got JSON-RPC error response 00:12:27.928 response: 00:12:27.928 { 00:12:27.928 "code": -32602, 00:12:27.928 "message": "Invalid cntlid range [1-65520]" 00:12:27.928 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:27.928 14:53:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14742 -i 6 -I 5 00:12:28.187 [2024-12-11 14:53:21.015067] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14742: invalid cntlid range [6-5] 00:12:28.187 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:12:28.187 { 00:12:28.187 "nqn": "nqn.2016-06.io.spdk:cnode14742", 00:12:28.187 "min_cntlid": 6, 00:12:28.187 "max_cntlid": 5, 00:12:28.187 "method": "nvmf_create_subsystem", 00:12:28.187 "req_id": 1 00:12:28.187 } 00:12:28.187 Got JSON-RPC error response 00:12:28.187 response: 00:12:28.187 { 00:12:28.187 "code": -32602, 00:12:28.187 "message": "Invalid cntlid range [6-5]" 00:12:28.187 }' 00:12:28.187 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:12:28.187 { 00:12:28.187 "nqn": "nqn.2016-06.io.spdk:cnode14742", 00:12:28.188 "min_cntlid": 6, 00:12:28.188 "max_cntlid": 5, 00:12:28.188 "method": "nvmf_create_subsystem", 00:12:28.188 "req_id": 1 00:12:28.188 } 00:12:28.188 Got JSON-RPC error response 00:12:28.188 response: 00:12:28.188 { 00:12:28.188 "code": -32602, 00:12:28.188 "message": "Invalid cntlid range [6-5]" 00:12:28.188 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:12:28.188 { 00:12:28.188 "name": "foobar", 00:12:28.188 "method": "nvmf_delete_target", 00:12:28.188 "req_id": 1 00:12:28.188 } 00:12:28.188 Got JSON-RPC error response 00:12:28.188 response: 00:12:28.188 { 00:12:28.188 "code": -32602, 00:12:28.188 
"message": "The specified target doesn'\''t exist, cannot delete it." 00:12:28.188 }' 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:12:28.188 { 00:12:28.188 "name": "foobar", 00:12:28.188 "method": "nvmf_delete_target", 00:12:28.188 "req_id": 1 00:12:28.188 } 00:12:28.188 Got JSON-RPC error response 00:12:28.188 response: 00:12:28.188 { 00:12:28.188 "code": -32602, 00:12:28.188 "message": "The specified target doesn't exist, cannot delete it." 00:12:28.188 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:28.188 rmmod nvme_tcp 00:12:28.188 rmmod nvme_fabrics 00:12:28.188 rmmod nvme_keyring 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3051562 ']' 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3051562 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3051562 ']' 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3051562 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.188 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3051562 00:12:28.448 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:28.448 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:28.448 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3051562' 00:12:28.448 killing process with pid 3051562 00:12:28.448 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3051562 00:12:28.448 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3051562 00:12:28.448 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:28.448 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:12:28.448 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:28.448 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:12:28.448 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:12:28.448 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:28.448 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:12:28.448 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:28.448 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:28.448 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.448 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.448 14:53:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.987 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:30.988 00:12:30.988 real 0m11.988s 00:12:30.988 user 0m18.648s 00:12:30.988 sys 0m5.300s 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:30.988 ************************************ 00:12:30.988 END TEST nvmf_invalid 00:12:30.988 ************************************ 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:30.988 ************************************ 00:12:30.988 START TEST nvmf_connect_stress 00:12:30.988 ************************************ 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:30.988 * Looking for test storage... 
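Before the connect_stress output continues: the nvmf_invalid run that just finished above repeats one pattern throughout — send a deliberately bad parameter over JSON-RPC, capture the error, and substring-match the message. A hedged sketch of that pattern (rpc.py path and NQN are illustrative):

    # Ask the target to create a subsystem with an out-of-range min cntlid,
    # then assert on the JSON-RPC error text; rpc.py exits non-zero on error,
    # so the failure is tolerated and only the message is checked.
    rpc=./scripts/rpc.py
    out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6414 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]]

The same check is repeated above for min_cntlid 65520, max_cntlid 0 and 65520, an inverted range (6-5), an invalid model number, and deletion of a nonexistent target.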
00:12:30.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:30.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.988 --rc genhtml_branch_coverage=1 00:12:30.988 --rc genhtml_function_coverage=1 00:12:30.988 --rc genhtml_legend=1 00:12:30.988 --rc geninfo_all_blocks=1 00:12:30.988 --rc geninfo_unexecuted_blocks=1 00:12:30.988 00:12:30.988 ' 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:30.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.988 --rc genhtml_branch_coverage=1 00:12:30.988 --rc genhtml_function_coverage=1 00:12:30.988 --rc genhtml_legend=1 00:12:30.988 --rc geninfo_all_blocks=1 00:12:30.988 --rc geninfo_unexecuted_blocks=1 00:12:30.988 00:12:30.988 ' 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:30.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.988 --rc genhtml_branch_coverage=1 00:12:30.988 --rc genhtml_function_coverage=1 00:12:30.988 --rc genhtml_legend=1 00:12:30.988 --rc geninfo_all_blocks=1 00:12:30.988 --rc geninfo_unexecuted_blocks=1 00:12:30.988 00:12:30.988 ' 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:30.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.988 --rc genhtml_branch_coverage=1 00:12:30.988 --rc genhtml_function_coverage=1 00:12:30.988 --rc genhtml_legend=1 00:12:30.988 --rc geninfo_all_blocks=1 00:12:30.988 --rc geninfo_unexecuted_blocks=1 00:12:30.988 00:12:30.988 ' 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.988 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:30.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:12:30.989 14:53:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:12:37.562 14:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:37.562 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:37.562 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:37.562 Found net devices under 0000:86:00.0: cvl_0_0 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:37.562 Found net devices under 0000:86:00.1: cvl_0_1 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.562 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:37.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:12:37.563 00:12:37.563 --- 10.0.0.2 ping statistics --- 00:12:37.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.563 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:37.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:12:37.563 00:12:37.563 --- 10.0.0.1 ping statistics --- 00:12:37.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.563 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3055935 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3055935 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3055935 ']' 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:37.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.563 [2024-12-11 14:53:29.778429] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:12:37.563 [2024-12-11 14:53:29.778481] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.563 [2024-12-11 14:53:29.858813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:37.563 [2024-12-11 14:53:29.900151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.563 [2024-12-11 14:53:29.900187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.563 [2024-12-11 14:53:29.900194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:37.563 [2024-12-11 14:53:29.900200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:37.563 [2024-12-11 14:53:29.900205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:37.563 [2024-12-11 14:53:29.901632] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.563 [2024-12-11 14:53:29.901736] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.563 [2024-12-11 14:53:29.901737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:37.563 14:53:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.563 [2024-12-11 14:53:30.039120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
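The trace above is nvmf_tcp_init (nvmf/common.sh) wiring the two E810 ports into a self-contained test topology: the target-side port cvl_0_0 is moved into its own network namespace and addressed as 10.0.0.2, the initiator-side port cvl_0_1 stays in the default namespace as 10.0.0.1, a tagged iptables rule opens TCP/4420, both directions are ping-checked, and nvmf_tgt is then started inside the namespace. Roughly, the sequence reduces to the sketch below (interface, namespace and address names are taken from the log; treat it as a condensed illustration, not the harness script itself):

    ip netns add cvl_0_0_ns_spdk                              # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # the rule carries an SPDK_NVMF comment so teardown can later strip only this rule
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
    # target app runs inside the namespace (the log uses the full Jenkins workspace path)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

Once the target is up, connect_stress.sh drives it over the RPC socket: nvmf_create_transport -t tcp -o -u 8192, nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1, then a listener on 10.0.0.2:4420 and a NULL1 null bdev (bdev_null_create NULL1 1000 512), as the following entries show.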
00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.563 [2024-12-11 14:53:30.063410] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.563 NULL1 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3055958 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.txt 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.txt 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.563 14:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.563 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:37.564 14:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.564 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:37.823 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.823 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:37.823 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.823 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.823 14:53:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:38.391 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.391 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:38.391 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.391 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.391 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:38.650 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.650 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:38.650 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.650 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.650 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:38.958 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.958 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:38.958 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.958 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.958 14:53:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.279 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.279 14:53:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:39.279 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.279 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.279 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.583 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.583 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:39.583 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.583 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.583 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:39.841 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.841 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:39.841 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.842 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.842 14:53:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.100 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.100 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:40.100 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.100 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.100 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.669 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.669 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:40.669 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.669 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.669 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:40.927 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.927 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:40.927 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.927 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.927 14:53:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.186 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.186 14:53:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:41.186 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.186 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.186 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.445 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.445 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:41.445 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.445 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.445 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:41.703 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.703 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:41.703 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.703 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.703 14:53:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.268 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.268 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:42.268 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.268 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.268 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.527 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.527 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:42.527 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.527 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.527 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:42.785 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.785 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:42.785 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.785 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.785 14:53:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.043 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.043 14:53:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:43.043 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.043 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.043 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.302 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.302 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:43.302 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.302 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.302 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:43.869 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.870 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:43.870 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.870 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.870 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.129 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.129 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:44.129 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.129 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.129 14:53:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.388 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.388 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:44.388 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.388 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.388 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:44.647 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.647 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:44.647 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.647 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.647 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.215 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.215 14:53:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:45.215 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.215 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.215 14:53:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.474 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.474 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:45.474 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.474 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.474 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.733 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.733 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:45.733 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.733 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.733 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:45.992 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.992 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:45.992 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.992 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.992 14:53:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.251 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.251 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:46.251 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.251 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.251 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:46.819 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.819 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:46.819 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.819 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.819 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.077 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.077 14:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:47.077 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.077 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.077 14:53:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.336 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3055958 00:12:47.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3055958) - No such process 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3055958 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.txt 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:47.336 rmmod nvme_tcp 00:12:47.336 rmmod nvme_fabrics 00:12:47.336 rmmod nvme_keyring 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3055935 ']' 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3055935 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3055935 ']' 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3055935 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.336 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3055935 00:12:47.595 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:12:47.595 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:47.595 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3055935' 00:12:47.595 killing process with pid 3055935 00:12:47.595 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3055935 00:12:47.595 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3055935 00:12:47.595 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:47.595 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:47.595 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:47.595 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:12:47.595 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:12:47.595 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:47.595 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:12:47.595 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:47.595 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:47.595 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.595 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.595 14:53:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:50.133 00:12:50.133 real 0m19.044s 00:12:50.133 user 0m39.535s 00:12:50.133 sys 0m8.466s 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:50.133 ************************************ 00:12:50.133 END TEST nvmf_connect_stress 00:12:50.133 ************************************ 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.133 ************************************ 00:12:50.133 START TEST nvmf_fused_ordering 00:12:50.133 ************************************ 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:50.133 * Looking for test storage... 
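With the stress run complete, the trace above tears everything back down: the nvme-tcp/nvme-fabrics modules are unloaded, the nvmf_tgt reactor process (pid 3055935) is killed and waited on, the SPDK-tagged iptables rules are stripped, the namespace is removed and the initiator-side address is flushed. The iptables step works because ipts tagged every rule it inserted with an SPDK_NVMF comment, so iptr can restore everything except SPDK's own rules. A condensed sketch of the teardown (the body of _remove_spdk_ns is not shown in this excerpt; deleting cvl_0_0_ns_spdk is an assumption):

    modprobe -v -r nvme-tcp                                   # verbose rmmod also drops nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                        # stop the in-namespace nvmf_tgt (3055935 here)
    iptables-save | grep -v SPDK_NVMF | iptables-restore      # remove only the rules tagged by ipts
    ip netns delete cvl_0_0_ns_spdk                           # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                                  # clear the initiator-side address

The same prepare/teardown cycle then repeats for the next test: fused_ordering.sh sources nvmf/common.sh again, rediscovers the two 0x159b (E810) ports, rebuilds the namespace and addresses, and starts a fresh nvmf_tgt with core mask 0x2, as the entries that follow show.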
00:12:50.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:50.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.133 --rc genhtml_branch_coverage=1 00:12:50.133 --rc genhtml_function_coverage=1 00:12:50.133 --rc genhtml_legend=1 00:12:50.133 --rc geninfo_all_blocks=1 00:12:50.133 --rc geninfo_unexecuted_blocks=1 00:12:50.133 00:12:50.133 ' 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:50.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.133 --rc genhtml_branch_coverage=1 00:12:50.133 --rc genhtml_function_coverage=1 00:12:50.133 --rc genhtml_legend=1 00:12:50.133 --rc geninfo_all_blocks=1 00:12:50.133 --rc geninfo_unexecuted_blocks=1 00:12:50.133 00:12:50.133 ' 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:50.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.133 --rc genhtml_branch_coverage=1 00:12:50.133 --rc genhtml_function_coverage=1 00:12:50.133 --rc genhtml_legend=1 00:12:50.133 --rc geninfo_all_blocks=1 00:12:50.133 --rc geninfo_unexecuted_blocks=1 00:12:50.133 00:12:50.133 ' 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:50.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:50.133 --rc genhtml_branch_coverage=1 00:12:50.133 --rc genhtml_function_coverage=1 00:12:50.133 --rc genhtml_legend=1 00:12:50.133 --rc geninfo_all_blocks=1 00:12:50.133 --rc geninfo_unexecuted_blocks=1 00:12:50.133 00:12:50.133 ' 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.133 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:50.134 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:12:50.134 14:53:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:12:56.710 14:53:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:56.710 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:56.710 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:56.710 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:56.711 Found net devices under 0000:86:00.0: cvl_0_0 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:56.711 Found net devices under 0000:86:00.1: cvl_0_1 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:56.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:56.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:12:56.711 00:12:56.711 --- 10.0.0.2 ping statistics --- 00:12:56.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.711 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:56.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:56.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:12:56.711 00:12:56.711 --- 10.0.0.1 ping statistics --- 00:12:56.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.711 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3061264 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3061264 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3061264 ']' 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:56.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.711 14:53:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:56.711 [2024-12-11 14:53:48.887682] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:12:56.711 [2024-12-11 14:53:48.887736] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.711 [2024-12-11 14:53:48.972842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.711 [2024-12-11 14:53:49.012018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.711 [2024-12-11 14:53:49.012054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.711 [2024-12-11 14:53:49.012061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:56.711 [2024-12-11 14:53:49.012067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:56.711 [2024-12-11 14:53:49.012072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:56.711 [2024-12-11 14:53:49.012643] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.711 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.711 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:56.711 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:56.711 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:56.711 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:56.711 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:56.711 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:56.711 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.711 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:56.711 [2024-12-11 14:53:49.155375] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:56.711 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.711 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:56.711 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:56.712 [2024-12-11 14:53:49.171568] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:56.712 NULL1 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.712 14:53:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:56.712 [2024-12-11 14:53:49.224561] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
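Taken together, the nvmf_tcp_init and rpc_cmd traces above build a self-contained TCP test topology on one host: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator side keeps cvl_0_1 with 10.0.0.1/24, TCP port 4420 is opened in iptables, nvmf_tgt runs inside the namespace, and the fused-ordering subsystem is configured over the default /var/tmp/spdk.sock RPC socket. A condensed replay of those steps follows; it is a sketch assuming it runs from an SPDK checkout, with rpc.py standing in for the harness's rpc_cmd wrapper and all values taken from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # (the harness waits for /var/tmp/spdk.sock before issuing RPCs)
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary then connects as an initiator with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' and prints one fused_ordering(N) line per completed iteration, as listed below.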
00:12:56.712 [2024-12-11 14:53:49.224593] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061363 ] 00:12:56.712 Attached to nqn.2016-06.io.spdk:cnode1 00:12:56.712 Namespace ID: 1 size: 1GB 00:12:56.712 fused_ordering(0) 00:12:56.712 fused_ordering(1) 00:12:56.712 fused_ordering(2) 00:12:56.712 fused_ordering(3) 00:12:56.712 fused_ordering(4) 00:12:56.712 fused_ordering(5) 00:12:56.712 fused_ordering(6) 00:12:56.712 fused_ordering(7) 00:12:56.712 fused_ordering(8) 00:12:56.712 fused_ordering(9) 00:12:56.712 fused_ordering(10) 00:12:56.712 fused_ordering(11) 00:12:56.712 fused_ordering(12) 00:12:56.712 fused_ordering(13) 00:12:56.712 fused_ordering(14) 00:12:56.712 fused_ordering(15) 00:12:56.712 fused_ordering(16) 00:12:56.712 fused_ordering(17) 00:12:56.712 fused_ordering(18) 00:12:56.712 fused_ordering(19) 00:12:56.712 fused_ordering(20) 00:12:56.712 fused_ordering(21) 00:12:56.712 fused_ordering(22) 00:12:56.712 fused_ordering(23) 00:12:56.712 fused_ordering(24) 00:12:56.712 fused_ordering(25) 00:12:56.712 fused_ordering(26) 00:12:56.712 fused_ordering(27) 00:12:56.712 fused_ordering(28) 00:12:56.712 fused_ordering(29) 00:12:56.712 fused_ordering(30) 00:12:56.712 fused_ordering(31) 00:12:56.712 fused_ordering(32) 00:12:56.712 fused_ordering(33) 00:12:56.712 fused_ordering(34) 00:12:56.712 fused_ordering(35) 00:12:56.712 fused_ordering(36) 00:12:56.712 fused_ordering(37) 00:12:56.712 fused_ordering(38) 00:12:56.712 fused_ordering(39) 00:12:56.712 fused_ordering(40) 00:12:56.712 fused_ordering(41) 00:12:56.712 fused_ordering(42) 00:12:56.712 fused_ordering(43) 00:12:56.712 fused_ordering(44) 00:12:56.712 fused_ordering(45) 00:12:56.712 fused_ordering(46) 00:12:56.712 fused_ordering(47) 00:12:56.712 fused_ordering(48) 00:12:56.712 fused_ordering(49) 00:12:56.712 fused_ordering(50) 00:12:56.712 fused_ordering(51) 00:12:56.712 fused_ordering(52) 00:12:56.712 fused_ordering(53) 00:12:56.712 fused_ordering(54) 00:12:56.712 fused_ordering(55) 00:12:56.712 fused_ordering(56) 00:12:56.712 fused_ordering(57) 00:12:56.712 fused_ordering(58) 00:12:56.712 fused_ordering(59) 00:12:56.712 fused_ordering(60) 00:12:56.712 fused_ordering(61) 00:12:56.712 fused_ordering(62) 00:12:56.712 fused_ordering(63) 00:12:56.712 fused_ordering(64) 00:12:56.712 fused_ordering(65) 00:12:56.712 fused_ordering(66) 00:12:56.712 fused_ordering(67) 00:12:56.712 fused_ordering(68) 00:12:56.712 fused_ordering(69) 00:12:56.712 fused_ordering(70) 00:12:56.712 fused_ordering(71) 00:12:56.712 fused_ordering(72) 00:12:56.712 fused_ordering(73) 00:12:56.712 fused_ordering(74) 00:12:56.712 fused_ordering(75) 00:12:56.712 fused_ordering(76) 00:12:56.712 fused_ordering(77) 00:12:56.712 fused_ordering(78) 00:12:56.712 fused_ordering(79) 00:12:56.712 fused_ordering(80) 00:12:56.712 fused_ordering(81) 00:12:56.712 fused_ordering(82) 00:12:56.712 fused_ordering(83) 00:12:56.712 fused_ordering(84) 00:12:56.712 fused_ordering(85) 00:12:56.712 fused_ordering(86) 00:12:56.712 fused_ordering(87) 00:12:56.712 fused_ordering(88) 00:12:56.712 fused_ordering(89) 00:12:56.712 fused_ordering(90) 00:12:56.712 fused_ordering(91) 00:12:56.712 fused_ordering(92) 00:12:56.712 fused_ordering(93) 00:12:56.712 fused_ordering(94) 00:12:56.712 fused_ordering(95) 00:12:56.712 fused_ordering(96) 00:12:56.712 fused_ordering(97) 00:12:56.712 fused_ordering(98) 
00:12:56.712 fused_ordering(99) 00:12:56.712 fused_ordering(100) 00:12:56.712 fused_ordering(101) 00:12:56.712 fused_ordering(102) 00:12:56.712 fused_ordering(103) 00:12:56.712 fused_ordering(104) 00:12:56.712 fused_ordering(105) 00:12:56.712 fused_ordering(106) 00:12:56.712 fused_ordering(107) 00:12:56.712 fused_ordering(108) 00:12:56.712 fused_ordering(109) 00:12:56.712 fused_ordering(110) 00:12:56.712 fused_ordering(111) 00:12:56.712 fused_ordering(112) 00:12:56.712 fused_ordering(113) 00:12:56.712 fused_ordering(114) 00:12:56.712 fused_ordering(115) 00:12:56.712 fused_ordering(116) 00:12:56.712 fused_ordering(117) 00:12:56.712 fused_ordering(118) 00:12:56.712 fused_ordering(119) 00:12:56.712 fused_ordering(120) 00:12:56.712 fused_ordering(121) 00:12:56.712 fused_ordering(122) 00:12:56.712 fused_ordering(123) 00:12:56.712 fused_ordering(124) 00:12:56.712 fused_ordering(125) 00:12:56.712 fused_ordering(126) 00:12:56.712 fused_ordering(127) 00:12:56.712 fused_ordering(128) 00:12:56.712 fused_ordering(129) 00:12:56.712 fused_ordering(130) 00:12:56.712 fused_ordering(131) 00:12:56.712 fused_ordering(132) 00:12:56.712 fused_ordering(133) 00:12:56.712 fused_ordering(134) 00:12:56.712 fused_ordering(135) 00:12:56.712 fused_ordering(136) 00:12:56.712 fused_ordering(137) 00:12:56.712 fused_ordering(138) 00:12:56.712 fused_ordering(139) 00:12:56.712 fused_ordering(140) 00:12:56.712 fused_ordering(141) 00:12:56.712 fused_ordering(142) 00:12:56.712 fused_ordering(143) 00:12:56.712 fused_ordering(144) 00:12:56.712 fused_ordering(145) 00:12:56.712 fused_ordering(146) 00:12:56.712 fused_ordering(147) 00:12:56.712 fused_ordering(148) 00:12:56.712 fused_ordering(149) 00:12:56.712 fused_ordering(150) 00:12:56.712 fused_ordering(151) 00:12:56.712 fused_ordering(152) 00:12:56.712 fused_ordering(153) 00:12:56.712 fused_ordering(154) 00:12:56.712 fused_ordering(155) 00:12:56.712 fused_ordering(156) 00:12:56.712 fused_ordering(157) 00:12:56.712 fused_ordering(158) 00:12:56.712 fused_ordering(159) 00:12:56.712 fused_ordering(160) 00:12:56.713 fused_ordering(161) 00:12:56.713 fused_ordering(162) 00:12:56.713 fused_ordering(163) 00:12:56.713 fused_ordering(164) 00:12:56.713 fused_ordering(165) 00:12:56.713 fused_ordering(166) 00:12:56.713 fused_ordering(167) 00:12:56.713 fused_ordering(168) 00:12:56.713 fused_ordering(169) 00:12:56.713 fused_ordering(170) 00:12:56.713 fused_ordering(171) 00:12:56.713 fused_ordering(172) 00:12:56.713 fused_ordering(173) 00:12:56.713 fused_ordering(174) 00:12:56.713 fused_ordering(175) 00:12:56.713 fused_ordering(176) 00:12:56.713 fused_ordering(177) 00:12:56.713 fused_ordering(178) 00:12:56.713 fused_ordering(179) 00:12:56.713 fused_ordering(180) 00:12:56.713 fused_ordering(181) 00:12:56.713 fused_ordering(182) 00:12:56.713 fused_ordering(183) 00:12:56.713 fused_ordering(184) 00:12:56.713 fused_ordering(185) 00:12:56.713 fused_ordering(186) 00:12:56.713 fused_ordering(187) 00:12:56.713 fused_ordering(188) 00:12:56.713 fused_ordering(189) 00:12:56.713 fused_ordering(190) 00:12:56.713 fused_ordering(191) 00:12:56.713 fused_ordering(192) 00:12:56.713 fused_ordering(193) 00:12:56.713 fused_ordering(194) 00:12:56.713 fused_ordering(195) 00:12:56.713 fused_ordering(196) 00:12:56.713 fused_ordering(197) 00:12:56.713 fused_ordering(198) 00:12:56.713 fused_ordering(199) 00:12:56.713 fused_ordering(200) 00:12:56.713 fused_ordering(201) 00:12:56.713 fused_ordering(202) 00:12:56.713 fused_ordering(203) 00:12:56.713 fused_ordering(204) 00:12:56.713 fused_ordering(205) 00:12:56.972 
fused_ordering(206) 00:12:56.972 fused_ordering(207) 00:12:56.972 fused_ordering(208) 00:12:56.972 fused_ordering(209) 00:12:56.972 fused_ordering(210) 00:12:56.972 fused_ordering(211) 00:12:56.972 fused_ordering(212) 00:12:56.972 fused_ordering(213) 00:12:56.972 fused_ordering(214) 00:12:56.972 fused_ordering(215) 00:12:56.972 fused_ordering(216) 00:12:56.972 fused_ordering(217) 00:12:56.972 fused_ordering(218) 00:12:56.972 fused_ordering(219) 00:12:56.972 fused_ordering(220) 00:12:56.972 fused_ordering(221) 00:12:56.972 fused_ordering(222) 00:12:56.972 fused_ordering(223) 00:12:56.972 fused_ordering(224) 00:12:56.972 fused_ordering(225) 00:12:56.972 fused_ordering(226) 00:12:56.972 fused_ordering(227) 00:12:56.972 fused_ordering(228) 00:12:56.972 fused_ordering(229) 00:12:56.972 fused_ordering(230) 00:12:56.972 fused_ordering(231) 00:12:56.972 fused_ordering(232) 00:12:56.972 fused_ordering(233) 00:12:56.972 fused_ordering(234) 00:12:56.972 fused_ordering(235) 00:12:56.972 fused_ordering(236) 00:12:56.972 fused_ordering(237) 00:12:56.972 fused_ordering(238) 00:12:56.972 fused_ordering(239) 00:12:56.972 fused_ordering(240) 00:12:56.972 fused_ordering(241) 00:12:56.972 fused_ordering(242) 00:12:56.973 fused_ordering(243) 00:12:56.973 fused_ordering(244) 00:12:56.973 fused_ordering(245) 00:12:56.973 fused_ordering(246) 00:12:56.973 fused_ordering(247) 00:12:56.973 fused_ordering(248) 00:12:56.973 fused_ordering(249) 00:12:56.973 fused_ordering(250) 00:12:56.973 fused_ordering(251) 00:12:56.973 fused_ordering(252) 00:12:56.973 fused_ordering(253) 00:12:56.973 fused_ordering(254) 00:12:56.973 fused_ordering(255) 00:12:56.973 fused_ordering(256) 00:12:56.973 fused_ordering(257) 00:12:56.973 fused_ordering(258) 00:12:56.973 fused_ordering(259) 00:12:56.973 fused_ordering(260) 00:12:56.973 fused_ordering(261) 00:12:56.973 fused_ordering(262) 00:12:56.973 fused_ordering(263) 00:12:56.973 fused_ordering(264) 00:12:56.973 fused_ordering(265) 00:12:56.973 fused_ordering(266) 00:12:56.973 fused_ordering(267) 00:12:56.973 fused_ordering(268) 00:12:56.973 fused_ordering(269) 00:12:56.973 fused_ordering(270) 00:12:56.973 fused_ordering(271) 00:12:56.973 fused_ordering(272) 00:12:56.973 fused_ordering(273) 00:12:56.973 fused_ordering(274) 00:12:56.973 fused_ordering(275) 00:12:56.973 fused_ordering(276) 00:12:56.973 fused_ordering(277) 00:12:56.973 fused_ordering(278) 00:12:56.973 fused_ordering(279) 00:12:56.973 fused_ordering(280) 00:12:56.973 fused_ordering(281) 00:12:56.973 fused_ordering(282) 00:12:56.973 fused_ordering(283) 00:12:56.973 fused_ordering(284) 00:12:56.973 fused_ordering(285) 00:12:56.973 fused_ordering(286) 00:12:56.973 fused_ordering(287) 00:12:56.973 fused_ordering(288) 00:12:56.973 fused_ordering(289) 00:12:56.973 fused_ordering(290) 00:12:56.973 fused_ordering(291) 00:12:56.973 fused_ordering(292) 00:12:56.973 fused_ordering(293) 00:12:56.973 fused_ordering(294) 00:12:56.973 fused_ordering(295) 00:12:56.973 fused_ordering(296) 00:12:56.973 fused_ordering(297) 00:12:56.973 fused_ordering(298) 00:12:56.973 fused_ordering(299) 00:12:56.973 fused_ordering(300) 00:12:56.973 fused_ordering(301) 00:12:56.973 fused_ordering(302) 00:12:56.973 fused_ordering(303) 00:12:56.973 fused_ordering(304) 00:12:56.973 fused_ordering(305) 00:12:56.973 fused_ordering(306) 00:12:56.973 fused_ordering(307) 00:12:56.973 fused_ordering(308) 00:12:56.973 fused_ordering(309) 00:12:56.973 fused_ordering(310) 00:12:56.973 fused_ordering(311) 00:12:56.973 fused_ordering(312) 00:12:56.973 fused_ordering(313) 
00:12:56.973 fused_ordering(314) 00:12:56.973 fused_ordering(315) 00:12:56.973 fused_ordering(316) 00:12:56.973 fused_ordering(317) 00:12:56.973 fused_ordering(318) 00:12:56.973 fused_ordering(319) 00:12:56.973 fused_ordering(320) 00:12:56.973 fused_ordering(321) 00:12:56.973 fused_ordering(322) 00:12:56.973 fused_ordering(323) 00:12:56.973 fused_ordering(324) 00:12:56.973 fused_ordering(325) 00:12:56.973 fused_ordering(326) 00:12:56.973 fused_ordering(327) 00:12:56.973 fused_ordering(328) 00:12:56.973 fused_ordering(329) 00:12:56.973 fused_ordering(330) 00:12:56.973 fused_ordering(331) 00:12:56.973 fused_ordering(332) 00:12:56.973 fused_ordering(333) 00:12:56.973 fused_ordering(334) 00:12:56.973 fused_ordering(335) 00:12:56.973 fused_ordering(336) 00:12:56.973 fused_ordering(337) 00:12:56.973 fused_ordering(338) 00:12:56.973 fused_ordering(339) 00:12:56.973 fused_ordering(340) 00:12:56.973 fused_ordering(341) 00:12:56.973 fused_ordering(342) 00:12:56.973 fused_ordering(343) 00:12:56.973 fused_ordering(344) 00:12:56.973 fused_ordering(345) 00:12:56.973 fused_ordering(346) 00:12:56.973 fused_ordering(347) 00:12:56.973 fused_ordering(348) 00:12:56.973 fused_ordering(349) 00:12:56.973 fused_ordering(350) 00:12:56.973 fused_ordering(351) 00:12:56.973 fused_ordering(352) 00:12:56.973 fused_ordering(353) 00:12:56.973 fused_ordering(354) 00:12:56.973 fused_ordering(355) 00:12:56.973 fused_ordering(356) 00:12:56.973 fused_ordering(357) 00:12:56.973 fused_ordering(358) 00:12:56.973 fused_ordering(359) 00:12:56.973 fused_ordering(360) 00:12:56.973 fused_ordering(361) 00:12:56.973 fused_ordering(362) 00:12:56.973 fused_ordering(363) 00:12:56.973 fused_ordering(364) 00:12:56.973 fused_ordering(365) 00:12:56.973 fused_ordering(366) 00:12:56.973 fused_ordering(367) 00:12:56.973 fused_ordering(368) 00:12:56.973 fused_ordering(369) 00:12:56.973 fused_ordering(370) 00:12:56.973 fused_ordering(371) 00:12:56.973 fused_ordering(372) 00:12:56.973 fused_ordering(373) 00:12:56.973 fused_ordering(374) 00:12:56.973 fused_ordering(375) 00:12:56.973 fused_ordering(376) 00:12:56.973 fused_ordering(377) 00:12:56.973 fused_ordering(378) 00:12:56.973 fused_ordering(379) 00:12:56.973 fused_ordering(380) 00:12:56.973 fused_ordering(381) 00:12:56.973 fused_ordering(382) 00:12:56.973 fused_ordering(383) 00:12:56.973 fused_ordering(384) 00:12:56.973 fused_ordering(385) 00:12:56.973 fused_ordering(386) 00:12:56.973 fused_ordering(387) 00:12:56.973 fused_ordering(388) 00:12:56.973 fused_ordering(389) 00:12:56.973 fused_ordering(390) 00:12:56.973 fused_ordering(391) 00:12:56.973 fused_ordering(392) 00:12:56.973 fused_ordering(393) 00:12:56.973 fused_ordering(394) 00:12:56.973 fused_ordering(395) 00:12:56.973 fused_ordering(396) 00:12:56.973 fused_ordering(397) 00:12:56.973 fused_ordering(398) 00:12:56.973 fused_ordering(399) 00:12:56.973 fused_ordering(400) 00:12:56.973 fused_ordering(401) 00:12:56.973 fused_ordering(402) 00:12:56.973 fused_ordering(403) 00:12:56.973 fused_ordering(404) 00:12:56.973 fused_ordering(405) 00:12:56.973 fused_ordering(406) 00:12:56.973 fused_ordering(407) 00:12:56.973 fused_ordering(408) 00:12:56.973 fused_ordering(409) 00:12:56.973 fused_ordering(410) 00:12:57.233 fused_ordering(411) 00:12:57.233 fused_ordering(412) 00:12:57.233 fused_ordering(413) 00:12:57.233 fused_ordering(414) 00:12:57.233 fused_ordering(415) 00:12:57.233 fused_ordering(416) 00:12:57.233 fused_ordering(417) 00:12:57.233 fused_ordering(418) 00:12:57.233 fused_ordering(419) 00:12:57.233 fused_ordering(420) 00:12:57.233 
fused_ordering(421) 00:12:57.233 fused_ordering(422) 00:12:57.233 fused_ordering(423) 00:12:57.233 fused_ordering(424) 00:12:57.233 fused_ordering(425) 00:12:57.233 fused_ordering(426) 00:12:57.233 fused_ordering(427) 00:12:57.233 fused_ordering(428) 00:12:57.233 fused_ordering(429) 00:12:57.233 fused_ordering(430) 00:12:57.233 fused_ordering(431) 00:12:57.233 fused_ordering(432) 00:12:57.233 fused_ordering(433) 00:12:57.233 fused_ordering(434) 00:12:57.233 fused_ordering(435) 00:12:57.233 fused_ordering(436) 00:12:57.233 fused_ordering(437) 00:12:57.233 fused_ordering(438) 00:12:57.233 fused_ordering(439) 00:12:57.233 fused_ordering(440) 00:12:57.233 fused_ordering(441) 00:12:57.233 fused_ordering(442) 00:12:57.233 fused_ordering(443) 00:12:57.233 fused_ordering(444) 00:12:57.233 fused_ordering(445) 00:12:57.233 fused_ordering(446) 00:12:57.233 fused_ordering(447) 00:12:57.233 fused_ordering(448) 00:12:57.233 fused_ordering(449) 00:12:57.233 fused_ordering(450) 00:12:57.233 fused_ordering(451) 00:12:57.233 fused_ordering(452) 00:12:57.233 fused_ordering(453) 00:12:57.233 fused_ordering(454) 00:12:57.233 fused_ordering(455) 00:12:57.233 fused_ordering(456) 00:12:57.233 fused_ordering(457) 00:12:57.233 fused_ordering(458) 00:12:57.233 fused_ordering(459) 00:12:57.233 fused_ordering(460) 00:12:57.233 fused_ordering(461) 00:12:57.233 fused_ordering(462) 00:12:57.233 fused_ordering(463) 00:12:57.233 fused_ordering(464) 00:12:57.233 fused_ordering(465) 00:12:57.233 fused_ordering(466) 00:12:57.233 fused_ordering(467) 00:12:57.233 fused_ordering(468) 00:12:57.233 fused_ordering(469) 00:12:57.233 fused_ordering(470) 00:12:57.233 fused_ordering(471) 00:12:57.233 fused_ordering(472) 00:12:57.233 fused_ordering(473) 00:12:57.233 fused_ordering(474) 00:12:57.233 fused_ordering(475) 00:12:57.233 fused_ordering(476) 00:12:57.233 fused_ordering(477) 00:12:57.233 fused_ordering(478) 00:12:57.233 fused_ordering(479) 00:12:57.233 fused_ordering(480) 00:12:57.233 fused_ordering(481) 00:12:57.233 fused_ordering(482) 00:12:57.233 fused_ordering(483) 00:12:57.233 fused_ordering(484) 00:12:57.233 fused_ordering(485) 00:12:57.233 fused_ordering(486) 00:12:57.233 fused_ordering(487) 00:12:57.233 fused_ordering(488) 00:12:57.233 fused_ordering(489) 00:12:57.233 fused_ordering(490) 00:12:57.233 fused_ordering(491) 00:12:57.233 fused_ordering(492) 00:12:57.233 fused_ordering(493) 00:12:57.233 fused_ordering(494) 00:12:57.233 fused_ordering(495) 00:12:57.233 fused_ordering(496) 00:12:57.233 fused_ordering(497) 00:12:57.233 fused_ordering(498) 00:12:57.233 fused_ordering(499) 00:12:57.233 fused_ordering(500) 00:12:57.233 fused_ordering(501) 00:12:57.233 fused_ordering(502) 00:12:57.233 fused_ordering(503) 00:12:57.233 fused_ordering(504) 00:12:57.233 fused_ordering(505) 00:12:57.233 fused_ordering(506) 00:12:57.233 fused_ordering(507) 00:12:57.233 fused_ordering(508) 00:12:57.233 fused_ordering(509) 00:12:57.233 fused_ordering(510) 00:12:57.233 fused_ordering(511) 00:12:57.233 fused_ordering(512) 00:12:57.233 fused_ordering(513) 00:12:57.233 fused_ordering(514) 00:12:57.233 fused_ordering(515) 00:12:57.234 fused_ordering(516) 00:12:57.234 fused_ordering(517) 00:12:57.234 fused_ordering(518) 00:12:57.234 fused_ordering(519) 00:12:57.234 fused_ordering(520) 00:12:57.234 fused_ordering(521) 00:12:57.234 fused_ordering(522) 00:12:57.234 fused_ordering(523) 00:12:57.234 fused_ordering(524) 00:12:57.234 fused_ordering(525) 00:12:57.234 fused_ordering(526) 00:12:57.234 fused_ordering(527) 00:12:57.234 fused_ordering(528) 
00:12:57.234 fused_ordering(529) 00:12:57.234 fused_ordering(530) 00:12:57.234 fused_ordering(531) 00:12:57.234 fused_ordering(532) 00:12:57.234 fused_ordering(533) 00:12:57.234 fused_ordering(534) 00:12:57.234 fused_ordering(535) 00:12:57.234 fused_ordering(536) 00:12:57.234 fused_ordering(537) 00:12:57.234 fused_ordering(538) 00:12:57.234 fused_ordering(539) 00:12:57.234 fused_ordering(540) 00:12:57.234 fused_ordering(541) 00:12:57.234 fused_ordering(542) 00:12:57.234 fused_ordering(543) 00:12:57.234 fused_ordering(544) 00:12:57.234 fused_ordering(545) 00:12:57.234 fused_ordering(546) 00:12:57.234 fused_ordering(547) 00:12:57.234 fused_ordering(548) 00:12:57.234 fused_ordering(549) 00:12:57.234 fused_ordering(550) 00:12:57.234 fused_ordering(551) 00:12:57.234 fused_ordering(552) 00:12:57.234 fused_ordering(553) 00:12:57.234 fused_ordering(554) 00:12:57.234 fused_ordering(555) 00:12:57.234 fused_ordering(556) 00:12:57.234 fused_ordering(557) 00:12:57.234 fused_ordering(558) 00:12:57.234 fused_ordering(559) 00:12:57.234 fused_ordering(560) 00:12:57.234 fused_ordering(561) 00:12:57.234 fused_ordering(562) 00:12:57.234 fused_ordering(563) 00:12:57.234 fused_ordering(564) 00:12:57.234 fused_ordering(565) 00:12:57.234 fused_ordering(566) 00:12:57.234 fused_ordering(567) 00:12:57.234 fused_ordering(568) 00:12:57.234 fused_ordering(569) 00:12:57.234 fused_ordering(570) 00:12:57.234 fused_ordering(571) 00:12:57.234 fused_ordering(572) 00:12:57.234 fused_ordering(573) 00:12:57.234 fused_ordering(574) 00:12:57.234 fused_ordering(575) 00:12:57.234 fused_ordering(576) 00:12:57.234 fused_ordering(577) 00:12:57.234 fused_ordering(578) 00:12:57.234 fused_ordering(579) 00:12:57.234 fused_ordering(580) 00:12:57.234 fused_ordering(581) 00:12:57.234 fused_ordering(582) 00:12:57.234 fused_ordering(583) 00:12:57.234 fused_ordering(584) 00:12:57.234 fused_ordering(585) 00:12:57.234 fused_ordering(586) 00:12:57.234 fused_ordering(587) 00:12:57.234 fused_ordering(588) 00:12:57.234 fused_ordering(589) 00:12:57.234 fused_ordering(590) 00:12:57.234 fused_ordering(591) 00:12:57.234 fused_ordering(592) 00:12:57.234 fused_ordering(593) 00:12:57.234 fused_ordering(594) 00:12:57.234 fused_ordering(595) 00:12:57.234 fused_ordering(596) 00:12:57.234 fused_ordering(597) 00:12:57.234 fused_ordering(598) 00:12:57.234 fused_ordering(599) 00:12:57.234 fused_ordering(600) 00:12:57.234 fused_ordering(601) 00:12:57.234 fused_ordering(602) 00:12:57.234 fused_ordering(603) 00:12:57.234 fused_ordering(604) 00:12:57.234 fused_ordering(605) 00:12:57.234 fused_ordering(606) 00:12:57.234 fused_ordering(607) 00:12:57.234 fused_ordering(608) 00:12:57.234 fused_ordering(609) 00:12:57.234 fused_ordering(610) 00:12:57.234 fused_ordering(611) 00:12:57.234 fused_ordering(612) 00:12:57.234 fused_ordering(613) 00:12:57.234 fused_ordering(614) 00:12:57.234 fused_ordering(615) 00:12:57.494 fused_ordering(616) 00:12:57.494 fused_ordering(617) 00:12:57.494 fused_ordering(618) 00:12:57.494 fused_ordering(619) 00:12:57.494 fused_ordering(620) 00:12:57.494 fused_ordering(621) 00:12:57.494 fused_ordering(622) 00:12:57.494 fused_ordering(623) 00:12:57.494 fused_ordering(624) 00:12:57.494 fused_ordering(625) 00:12:57.494 fused_ordering(626) 00:12:57.494 fused_ordering(627) 00:12:57.494 fused_ordering(628) 00:12:57.494 fused_ordering(629) 00:12:57.494 fused_ordering(630) 00:12:57.494 fused_ordering(631) 00:12:57.494 fused_ordering(632) 00:12:57.494 fused_ordering(633) 00:12:57.494 fused_ordering(634) 00:12:57.494 fused_ordering(635) 00:12:57.494 
fused_ordering(636) 00:12:57.494 fused_ordering(637) 00:12:57.494 fused_ordering(638) 00:12:57.494 fused_ordering(639) 00:12:57.494 fused_ordering(640) 00:12:57.494 fused_ordering(641) 00:12:57.494 fused_ordering(642) 00:12:57.494 fused_ordering(643) 00:12:57.494 fused_ordering(644) 00:12:57.494 fused_ordering(645) 00:12:57.494 fused_ordering(646) 00:12:57.494 fused_ordering(647) 00:12:57.494 fused_ordering(648) 00:12:57.494 fused_ordering(649) 00:12:57.494 fused_ordering(650) 00:12:57.494 fused_ordering(651) 00:12:57.494 fused_ordering(652) 00:12:57.494 fused_ordering(653) 00:12:57.494 fused_ordering(654) 00:12:57.494 fused_ordering(655) 00:12:57.494 fused_ordering(656) 00:12:57.494 fused_ordering(657) 00:12:57.494 fused_ordering(658) 00:12:57.494 fused_ordering(659) 00:12:57.494 fused_ordering(660) 00:12:57.494 fused_ordering(661) 00:12:57.494 fused_ordering(662) 00:12:57.494 fused_ordering(663) 00:12:57.494 fused_ordering(664) 00:12:57.494 fused_ordering(665) 00:12:57.494 fused_ordering(666) 00:12:57.494 fused_ordering(667) 00:12:57.494 fused_ordering(668) 00:12:57.494 fused_ordering(669) 00:12:57.494 fused_ordering(670) 00:12:57.494 fused_ordering(671) 00:12:57.494 fused_ordering(672) 00:12:57.494 fused_ordering(673) 00:12:57.494 fused_ordering(674) 00:12:57.494 fused_ordering(675) 00:12:57.494 fused_ordering(676) 00:12:57.494 fused_ordering(677) 00:12:57.494 fused_ordering(678) 00:12:57.494 fused_ordering(679) 00:12:57.494 fused_ordering(680) 00:12:57.494 fused_ordering(681) 00:12:57.494 fused_ordering(682) 00:12:57.494 fused_ordering(683) 00:12:57.494 fused_ordering(684) 00:12:57.494 fused_ordering(685) 00:12:57.494 fused_ordering(686) 00:12:57.494 fused_ordering(687) 00:12:57.494 fused_ordering(688) 00:12:57.494 fused_ordering(689) 00:12:57.494 fused_ordering(690) 00:12:57.494 fused_ordering(691) 00:12:57.494 fused_ordering(692) 00:12:57.494 fused_ordering(693) 00:12:57.494 fused_ordering(694) 00:12:57.494 fused_ordering(695) 00:12:57.494 fused_ordering(696) 00:12:57.494 fused_ordering(697) 00:12:57.494 fused_ordering(698) 00:12:57.494 fused_ordering(699) 00:12:57.494 fused_ordering(700) 00:12:57.494 fused_ordering(701) 00:12:57.494 fused_ordering(702) 00:12:57.494 fused_ordering(703) 00:12:57.494 fused_ordering(704) 00:12:57.494 fused_ordering(705) 00:12:57.494 fused_ordering(706) 00:12:57.494 fused_ordering(707) 00:12:57.494 fused_ordering(708) 00:12:57.494 fused_ordering(709) 00:12:57.494 fused_ordering(710) 00:12:57.494 fused_ordering(711) 00:12:57.494 fused_ordering(712) 00:12:57.494 fused_ordering(713) 00:12:57.494 fused_ordering(714) 00:12:57.494 fused_ordering(715) 00:12:57.494 fused_ordering(716) 00:12:57.494 fused_ordering(717) 00:12:57.494 fused_ordering(718) 00:12:57.494 fused_ordering(719) 00:12:57.494 fused_ordering(720) 00:12:57.494 fused_ordering(721) 00:12:57.494 fused_ordering(722) 00:12:57.494 fused_ordering(723) 00:12:57.494 fused_ordering(724) 00:12:57.494 fused_ordering(725) 00:12:57.494 fused_ordering(726) 00:12:57.494 fused_ordering(727) 00:12:57.494 fused_ordering(728) 00:12:57.494 fused_ordering(729) 00:12:57.494 fused_ordering(730) 00:12:57.494 fused_ordering(731) 00:12:57.494 fused_ordering(732) 00:12:57.494 fused_ordering(733) 00:12:57.494 fused_ordering(734) 00:12:57.494 fused_ordering(735) 00:12:57.494 fused_ordering(736) 00:12:57.494 fused_ordering(737) 00:12:57.494 fused_ordering(738) 00:12:57.494 fused_ordering(739) 00:12:57.494 fused_ordering(740) 00:12:57.494 fused_ordering(741) 00:12:57.494 fused_ordering(742) 00:12:57.494 fused_ordering(743) 
00:12:57.494 fused_ordering(744) 00:12:57.494 fused_ordering(745) 00:12:57.495 fused_ordering(746) 00:12:57.495 fused_ordering(747) 00:12:57.495 fused_ordering(748) 00:12:57.495 fused_ordering(749) 00:12:57.495 fused_ordering(750) 00:12:57.495 fused_ordering(751) 00:12:57.495 fused_ordering(752) 00:12:57.495 fused_ordering(753) 00:12:57.495 fused_ordering(754) 00:12:57.495 fused_ordering(755) 00:12:57.495 fused_ordering(756) 00:12:57.495 fused_ordering(757) 00:12:57.495 fused_ordering(758) 00:12:57.495 fused_ordering(759) 00:12:57.495 fused_ordering(760) 00:12:57.495 fused_ordering(761) 00:12:57.495 fused_ordering(762) 00:12:57.495 fused_ordering(763) 00:12:57.495 fused_ordering(764) 00:12:57.495 fused_ordering(765) 00:12:57.495 fused_ordering(766) 00:12:57.495 fused_ordering(767) 00:12:57.495 fused_ordering(768) 00:12:57.495 fused_ordering(769) 00:12:57.495 fused_ordering(770) 00:12:57.495 fused_ordering(771) 00:12:57.495 fused_ordering(772) 00:12:57.495 fused_ordering(773) 00:12:57.495 fused_ordering(774) 00:12:57.495 fused_ordering(775) 00:12:57.495 fused_ordering(776) 00:12:57.495 fused_ordering(777) 00:12:57.495 fused_ordering(778) 00:12:57.495 fused_ordering(779) 00:12:57.495 fused_ordering(780) 00:12:57.495 fused_ordering(781) 00:12:57.495 fused_ordering(782) 00:12:57.495 fused_ordering(783) 00:12:57.495 fused_ordering(784) 00:12:57.495 fused_ordering(785) 00:12:57.495 fused_ordering(786) 00:12:57.495 fused_ordering(787) 00:12:57.495 fused_ordering(788) 00:12:57.495 fused_ordering(789) 00:12:57.495 fused_ordering(790) 00:12:57.495 fused_ordering(791) 00:12:57.495 fused_ordering(792) 00:12:57.495 fused_ordering(793) 00:12:57.495 fused_ordering(794) 00:12:57.495 fused_ordering(795) 00:12:57.495 fused_ordering(796) 00:12:57.495 fused_ordering(797) 00:12:57.495 fused_ordering(798) 00:12:57.495 fused_ordering(799) 00:12:57.495 fused_ordering(800) 00:12:57.495 fused_ordering(801) 00:12:57.495 fused_ordering(802) 00:12:57.495 fused_ordering(803) 00:12:57.495 fused_ordering(804) 00:12:57.495 fused_ordering(805) 00:12:57.495 fused_ordering(806) 00:12:57.495 fused_ordering(807) 00:12:57.495 fused_ordering(808) 00:12:57.495 fused_ordering(809) 00:12:57.495 fused_ordering(810) 00:12:57.495 fused_ordering(811) 00:12:57.495 fused_ordering(812) 00:12:57.495 fused_ordering(813) 00:12:57.495 fused_ordering(814) 00:12:57.495 fused_ordering(815) 00:12:57.495 fused_ordering(816) 00:12:57.495 fused_ordering(817) 00:12:57.495 fused_ordering(818) 00:12:57.495 fused_ordering(819) 00:12:57.495 fused_ordering(820) 00:12:58.064 fused_o[2024-12-11 14:53:50.964236] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x772d80 is same with the state(6) to be set 00:12:58.064 rdering(821) 00:12:58.064 fused_ordering(822) 00:12:58.064 fused_ordering(823) 00:12:58.064 fused_ordering(824) 00:12:58.064 fused_ordering(825) 00:12:58.064 fused_ordering(826) 00:12:58.064 fused_ordering(827) 00:12:58.064 fused_ordering(828) 00:12:58.064 fused_ordering(829) 00:12:58.064 fused_ordering(830) 00:12:58.064 fused_ordering(831) 00:12:58.064 fused_ordering(832) 00:12:58.064 fused_ordering(833) 00:12:58.064 fused_ordering(834) 00:12:58.064 fused_ordering(835) 00:12:58.064 fused_ordering(836) 00:12:58.064 fused_ordering(837) 00:12:58.064 fused_ordering(838) 00:12:58.064 fused_ordering(839) 00:12:58.064 fused_ordering(840) 00:12:58.064 fused_ordering(841) 00:12:58.064 fused_ordering(842) 00:12:58.064 fused_ordering(843) 00:12:58.064 fused_ordering(844) 00:12:58.064 fused_ordering(845) 00:12:58.064 
fused_ordering(846) 00:12:58.064 fused_ordering(847) 00:12:58.064 fused_ordering(848) 00:12:58.064 fused_ordering(849) 00:12:58.064 fused_ordering(850) 00:12:58.064 fused_ordering(851) 00:12:58.064 fused_ordering(852) 00:12:58.064 fused_ordering(853) 00:12:58.064 fused_ordering(854) 00:12:58.064 fused_ordering(855) 00:12:58.064 fused_ordering(856) 00:12:58.064 fused_ordering(857) 00:12:58.064 fused_ordering(858) 00:12:58.064 fused_ordering(859) 00:12:58.064 fused_ordering(860) 00:12:58.064 fused_ordering(861) 00:12:58.064 fused_ordering(862) 00:12:58.064 fused_ordering(863) 00:12:58.064 fused_ordering(864) 00:12:58.064 fused_ordering(865) 00:12:58.064 fused_ordering(866) 00:12:58.064 fused_ordering(867) 00:12:58.064 fused_ordering(868) 00:12:58.064 fused_ordering(869) 00:12:58.064 fused_ordering(870) 00:12:58.064 fused_ordering(871) 00:12:58.064 fused_ordering(872) 00:12:58.064 fused_ordering(873) 00:12:58.064 fused_ordering(874) 00:12:58.064 fused_ordering(875) 00:12:58.064 fused_ordering(876) 00:12:58.064 fused_ordering(877) 00:12:58.064 fused_ordering(878) 00:12:58.064 fused_ordering(879) 00:12:58.064 fused_ordering(880) 00:12:58.064 fused_ordering(881) 00:12:58.064 fused_ordering(882) 00:12:58.064 fused_ordering(883) 00:12:58.064 fused_ordering(884) 00:12:58.064 fused_ordering(885) 00:12:58.064 fused_ordering(886) 00:12:58.064 fused_ordering(887) 00:12:58.064 fused_ordering(888) 00:12:58.064 fused_ordering(889) 00:12:58.064 fused_ordering(890) 00:12:58.064 fused_ordering(891) 00:12:58.064 fused_ordering(892) 00:12:58.064 fused_ordering(893) 00:12:58.064 fused_ordering(894) 00:12:58.064 fused_ordering(895) 00:12:58.064 fused_ordering(896) 00:12:58.064 fused_ordering(897) 00:12:58.064 fused_ordering(898) 00:12:58.065 fused_ordering(899) 00:12:58.065 fused_ordering(900) 00:12:58.065 fused_ordering(901) 00:12:58.065 fused_ordering(902) 00:12:58.065 fused_ordering(903) 00:12:58.065 fused_ordering(904) 00:12:58.065 fused_ordering(905) 00:12:58.065 fused_ordering(906) 00:12:58.065 fused_ordering(907) 00:12:58.065 fused_ordering(908) 00:12:58.065 fused_ordering(909) 00:12:58.065 fused_ordering(910) 00:12:58.065 fused_ordering(911) 00:12:58.065 fused_ordering(912) 00:12:58.065 fused_ordering(913) 00:12:58.065 fused_ordering(914) 00:12:58.065 fused_ordering(915) 00:12:58.065 fused_ordering(916) 00:12:58.065 fused_ordering(917) 00:12:58.065 fused_ordering(918) 00:12:58.065 fused_ordering(919) 00:12:58.065 fused_ordering(920) 00:12:58.065 fused_ordering(921) 00:12:58.065 fused_ordering(922) 00:12:58.065 fused_ordering(923) 00:12:58.065 fused_ordering(924) 00:12:58.065 fused_ordering(925) 00:12:58.065 fused_ordering(926) 00:12:58.065 fused_ordering(927) 00:12:58.065 fused_ordering(928) 00:12:58.065 fused_ordering(929) 00:12:58.065 fused_ordering(930) 00:12:58.065 fused_ordering(931) 00:12:58.065 fused_ordering(932) 00:12:58.065 fused_ordering(933) 00:12:58.065 fused_ordering(934) 00:12:58.065 fused_ordering(935) 00:12:58.065 fused_ordering(936) 00:12:58.065 fused_ordering(937) 00:12:58.065 fused_ordering(938) 00:12:58.065 fused_ordering(939) 00:12:58.065 fused_ordering(940) 00:12:58.065 fused_ordering(941) 00:12:58.065 fused_ordering(942) 00:12:58.065 fused_ordering(943) 00:12:58.065 fused_ordering(944) 00:12:58.065 fused_ordering(945) 00:12:58.065 fused_ordering(946) 00:12:58.065 fused_ordering(947) 00:12:58.065 fused_ordering(948) 00:12:58.065 fused_ordering(949) 00:12:58.065 fused_ordering(950) 00:12:58.065 fused_ordering(951) 00:12:58.065 fused_ordering(952) 00:12:58.065 fused_ordering(953) 
00:12:58.065 fused_ordering(954) 00:12:58.065 fused_ordering(955) 00:12:58.065 fused_ordering(956) 00:12:58.065 fused_ordering(957) 00:12:58.065 fused_ordering(958) 00:12:58.065 fused_ordering(959) 00:12:58.065 fused_ordering(960) 00:12:58.065 fused_ordering(961) 00:12:58.065 fused_ordering(962) 00:12:58.065 fused_ordering(963) 00:12:58.065 fused_ordering(964) 00:12:58.065 fused_ordering(965) 00:12:58.065 fused_ordering(966) 00:12:58.065 fused_ordering(967) 00:12:58.065 fused_ordering(968) 00:12:58.065 fused_ordering(969) 00:12:58.065 fused_ordering(970) 00:12:58.065 fused_ordering(971) 00:12:58.065 fused_ordering(972) 00:12:58.065 fused_ordering(973) 00:12:58.065 fused_ordering(974) 00:12:58.065 fused_ordering(975) 00:12:58.065 fused_ordering(976) 00:12:58.065 fused_ordering(977) 00:12:58.065 fused_ordering(978) 00:12:58.065 fused_ordering(979) 00:12:58.065 fused_ordering(980) 00:12:58.065 fused_ordering(981) 00:12:58.065 fused_ordering(982) 00:12:58.065 fused_ordering(983) 00:12:58.065 fused_ordering(984) 00:12:58.065 fused_ordering(985) 00:12:58.065 fused_ordering(986) 00:12:58.065 fused_ordering(987) 00:12:58.065 fused_ordering(988) 00:12:58.065 fused_ordering(989) 00:12:58.065 fused_ordering(990) 00:12:58.065 fused_ordering(991) 00:12:58.065 fused_ordering(992) 00:12:58.065 fused_ordering(993) 00:12:58.065 fused_ordering(994) 00:12:58.065 fused_ordering(995) 00:12:58.065 fused_ordering(996) 00:12:58.065 fused_ordering(997) 00:12:58.065 fused_ordering(998) 00:12:58.065 fused_ordering(999) 00:12:58.065 fused_ordering(1000) 00:12:58.065 fused_ordering(1001) 00:12:58.065 fused_ordering(1002) 00:12:58.065 fused_ordering(1003) 00:12:58.065 fused_ordering(1004) 00:12:58.065 fused_ordering(1005) 00:12:58.065 fused_ordering(1006) 00:12:58.065 fused_ordering(1007) 00:12:58.065 fused_ordering(1008) 00:12:58.065 fused_ordering(1009) 00:12:58.065 fused_ordering(1010) 00:12:58.065 fused_ordering(1011) 00:12:58.065 fused_ordering(1012) 00:12:58.065 fused_ordering(1013) 00:12:58.065 fused_ordering(1014) 00:12:58.065 fused_ordering(1015) 00:12:58.065 fused_ordering(1016) 00:12:58.065 fused_ordering(1017) 00:12:58.065 fused_ordering(1018) 00:12:58.065 fused_ordering(1019) 00:12:58.065 fused_ordering(1020) 00:12:58.065 fused_ordering(1021) 00:12:58.065 fused_ordering(1022) 00:12:58.065 fused_ordering(1023) 00:12:58.065 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:58.065 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:58.065 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:58.065 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:58.065 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:58.065 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:58.065 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:58.065 14:53:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:58.065 rmmod nvme_tcp 00:12:58.065 rmmod nvme_fabrics 00:12:58.065 rmmod nvme_keyring 00:12:58.065 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:58.065 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- 
# set -e 00:12:58.065 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:12:58.065 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3061264 ']' 00:12:58.065 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3061264 00:12:58.065 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3061264 ']' 00:12:58.065 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3061264 00:12:58.065 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:58.065 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.065 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3061264 00:12:58.065 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:58.065 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:58.065 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3061264' 00:12:58.065 killing process with pid 3061264 00:12:58.065 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3061264 00:12:58.065 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3061264 00:12:58.325 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:58.325 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:58.325 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:58.325 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:12:58.325 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:12:58.325 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:58.325 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:12:58.325 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:58.325 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:58.325 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.325 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.325 14:53:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:00.863 00:13:00.863 real 0m10.617s 00:13:00.863 user 0m4.998s 00:13:00.863 sys 0m5.729s 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 
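nvmftestfini then unwinds the environment in roughly reverse order: the kernel NVMe/TCP initiator modules are removed, the nvmf_tgt process is killed by PID, the SPDK_NVMF-tagged iptables rule is dropped by round-tripping the ruleset through grep, and the namespace plus leftover addresses are cleaned up. A minimal sketch of that teardown (3061264 is this run's target PID; remove_spdk_ns runs with xtrace disabled in the log, so the namespace deletion shown here is the assumed equivalent):

  modprobe -v -r nvme-tcp nvme-fabrics
  kill 3061264                                            # killprocess waits for the reactor to exit
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the tagged ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk                         # assumed effect of remove_spdk_ns
  ip -4 addr flush cvl_0_1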
-- # set +x 00:13:00.863 ************************************ 00:13:00.863 END TEST nvmf_fused_ordering 00:13:00.863 ************************************ 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:00.863 ************************************ 00:13:00.863 START TEST nvmf_ns_masking 00:13:00.863 ************************************ 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:00.863 * Looking for test storage... 00:13:00.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:00.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.863 --rc genhtml_branch_coverage=1 00:13:00.863 --rc genhtml_function_coverage=1 00:13:00.863 --rc genhtml_legend=1 00:13:00.863 --rc geninfo_all_blocks=1 00:13:00.863 --rc geninfo_unexecuted_blocks=1 00:13:00.863 00:13:00.863 ' 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:00.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.863 --rc genhtml_branch_coverage=1 00:13:00.863 --rc genhtml_function_coverage=1 00:13:00.863 --rc genhtml_legend=1 00:13:00.863 --rc geninfo_all_blocks=1 00:13:00.863 --rc geninfo_unexecuted_blocks=1 00:13:00.863 00:13:00.863 ' 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:00.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.863 --rc genhtml_branch_coverage=1 00:13:00.863 --rc genhtml_function_coverage=1 00:13:00.863 --rc genhtml_legend=1 00:13:00.863 --rc geninfo_all_blocks=1 00:13:00.863 --rc geninfo_unexecuted_blocks=1 00:13:00.863 00:13:00.863 ' 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:00.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.863 --rc genhtml_branch_coverage=1 00:13:00.863 --rc genhtml_function_coverage=1 00:13:00.863 --rc genhtml_legend=1 00:13:00.863 --rc geninfo_all_blocks=1 00:13:00.863 --rc geninfo_unexecuted_blocks=1 00:13:00.863 00:13:00.863 ' 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
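Before sourcing the NVMe-oF helpers, the ns_masking test probes the installed lcov (1.15 in this run) and uses scripts/common.sh's cmp_versions to decide whether it predates 2.x, which selects the coverage flags exported just above. The comparison splits each version string on dots, dashes and colons and walks the fields numerically. A compact sketch of the same idea (the function name lt mirrors the trace, but this is an illustrative reimplementation, not the scripts/common.sh code):

  lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1    # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov older than 2.x: keep the legacy --rc branch/function coverage options"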
-- nvmf/common.sh@7 -- # uname -s 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.863 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:00.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:00.864 14:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=7f62bad8-1037-4c19-9ef9-37dea2cf3add 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=1bfd4850-fac0-4f90-8f0f-309d05696460 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f940ed0b-64fa-4bf7-a034-6c17c66376b5 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:00.864 14:53:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:07.438 
14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:07.438 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:07.438 
14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:07.438 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:07.438 Found net devices under 0000:86:00.0: cvl_0_0 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:07.438 Found net devices under 0000:86:00.1: cvl_0_1 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:07.438 14:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:07.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:07.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:13:07.438 00:13:07.438 --- 10.0.0.2 ping statistics --- 00:13:07.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.438 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:07.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:07.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:13:07.438 00:13:07.438 --- 10.0.0.1 ping statistics --- 00:13:07.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.438 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3065127 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:07.438 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3065127 00:13:07.439 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3065127 ']' 00:13:07.439 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.439 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.439 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.439 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.439 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:07.439 [2024-12-11 14:53:59.653545] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:13:07.439 [2024-12-11 14:53:59.653588] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:07.439 [2024-12-11 14:53:59.731828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.439 [2024-12-11 14:53:59.770182] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:07.439 [2024-12-11 14:53:59.770214] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:07.439 [2024-12-11 14:53:59.770221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:07.439 [2024-12-11 14:53:59.770227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:07.439 [2024-12-11 14:53:59.770233] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:07.439 [2024-12-11 14:53:59.770769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.439 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:07.439 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:07.439 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:07.439 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:07.439 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:07.439 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:07.439 14:53:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:07.439 [2024-12-11 14:54:00.095258] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:07.439 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:07.439 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:07.439 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:07.439 Malloc1 00:13:07.439 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:07.697 Malloc2 00:13:07.697 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:07.956 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:07.956 14:54:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:08.214 [2024-12-11 14:54:01.168595] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:08.214 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:08.214 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f940ed0b-64fa-4bf7-a034-6c17c66376b5 -a 10.0.0.2 -s 4420 -i 4 00:13:08.473 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:08.473 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:08.473 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.473 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 
00:13:08.473 14:54:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:10.379 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:10.379 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:10.379 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.379 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:10.379 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.379 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:10.379 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:10.379 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:10.379 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:10.379 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:10.379 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:10.379 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:10.379 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:10.379 [ 0]:0x1 00:13:10.379 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:10.379 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:10.638 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f9fd02bde6c54bbaa389b338522d1e0c 00:13:10.638 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f9fd02bde6c54bbaa389b338522d1e0c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:10.638 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:10.638 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:10.638 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:10.638 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:10.638 [ 0]:0x1 00:13:10.638 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:10.638 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:10.898 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f9fd02bde6c54bbaa389b338522d1e0c 00:13:10.898 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f9fd02bde6c54bbaa389b338522d1e0c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:10.898 14:54:03 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:10.898 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:10.898 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:10.898 [ 1]:0x2 00:13:10.898 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:10.898 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:10.898 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bfbc5291c1d344e3aa39fd21b5d2cf77 00:13:10.898 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bfbc5291c1d344e3aa39fd21b5d2cf77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:10.898 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:13:10.898 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.898 14:54:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.157 14:54:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:11.416 14:54:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:11.416 14:54:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f940ed0b-64fa-4bf7-a034-6c17c66376b5 -a 10.0.0.2 -s 4420 -i 4 00:13:11.416 14:54:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:11.416 14:54:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:11.416 14:54:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.416 14:54:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:13:11.416 14:54:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:13:11.416 14:54:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 
-- # return 0 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:13.953 [ 0]:0x2 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=bfbc5291c1d344e3aa39fd21b5d2cf77 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bfbc5291c1d344e3aa39fd21b5d2cf77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:13.953 [ 0]:0x1 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f9fd02bde6c54bbaa389b338522d1e0c 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f9fd02bde6c54bbaa389b338522d1e0c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:13.953 [ 1]:0x2 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:13.953 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bfbc5291c1d344e3aa39fd21b5d2cf77 00:13:13.954 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bfbc5291c1d344e3aa39fd21b5d2cf77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:13.954 14:54:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:14.213 14:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:14.213 [ 0]:0x2 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bfbc5291c1d344e3aa39fd21b5d2cf77 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bfbc5291c1d344e3aa39fd21b5d2cf77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:14.213 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:14.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.472 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:14.473 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:14.473 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f940ed0b-64fa-4bf7-a034-6c17c66376b5 -a 10.0.0.2 -s 4420 -i 4 00:13:14.732 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:14.732 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:14.732 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.732 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:14.732 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:14.732 14:54:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:16.638 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:16.638 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:16.638 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:16.897 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:16.897 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.897 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:16.897 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:16.897 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:16.897 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:16.898 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:16.898 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:13:16.898 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:16.898 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:16.898 [ 0]:0x1 00:13:16.898 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:16.898 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:17.157 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f9fd02bde6c54bbaa389b338522d1e0c 00:13:17.157 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f9fd02bde6c54bbaa389b338522d1e0c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:17.157 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:17.157 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:17.157 14:54:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:17.157 [ 1]:0x2 00:13:17.157 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:17.157 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:17.157 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bfbc5291c1d344e3aa39fd21b5d2cf77 00:13:17.157 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bfbc5291c1d344e3aa39fd21b5d2cf77 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:17.157 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:17.417 [ 0]:0x2 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bfbc5291c1d344e3aa39fd21b5d2cf77 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bfbc5291c1d344e3aa39fd21b5d2cf77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:17.417 14:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:13:17.417 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:17.677 [2024-12-11 14:54:10.563254] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:17.677 request: 00:13:17.677 { 00:13:17.677 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:17.677 "nsid": 2, 00:13:17.677 "host": "nqn.2016-06.io.spdk:host1", 00:13:17.677 "method": "nvmf_ns_remove_host", 00:13:17.677 "req_id": 1 00:13:17.677 } 00:13:17.677 Got JSON-RPC error response 00:13:17.677 response: 00:13:17.677 { 00:13:17.677 "code": -32602, 00:13:17.677 "message": "Invalid parameters" 00:13:17.677 } 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:17.677 14:54:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:17.677 [ 0]:0x2 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bfbc5291c1d344e3aa39fd21b5d2cf77 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bfbc5291c1d344e3aa39fd21b5d2cf77 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:17.677 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.937 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3067641 00:13:17.937 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:17.937 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.937 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3067641 /var/tmp/host.sock 00:13:17.937 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3067641 ']' 00:13:17.937 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:17.937 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.937 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:17.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:17.937 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.937 14:54:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:17.937 [2024-12-11 14:54:10.784972] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:13:17.937 [2024-12-11 14:54:10.785018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3067641 ] 00:13:17.937 [2024-12-11 14:54:10.863049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.937 [2024-12-11 14:54:10.902776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.196 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.196 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:18.196 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.456 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:18.715 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 7f62bad8-1037-4c19-9ef9-37dea2cf3add 00:13:18.715 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:18.715 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7F62BAD810374C199EF937DEA2CF3ADD -i 00:13:18.715 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 1bfd4850-fac0-4f90-8f0f-309d05696460 00:13:18.715 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:18.715 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 1BFD4850FAC04F908F0F309D05696460 -i 00:13:18.975 14:54:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:19.234 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:19.493 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:19.493 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:19.752 nvme0n1 00:13:19.752 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:19.752 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:20.010 nvme1n2 00:13:20.010 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:20.010 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:20.010 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:20.010 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:20.010 14:54:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:20.268 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:20.268 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:20.268 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:20.268 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:20.527 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 7f62bad8-1037-4c19-9ef9-37dea2cf3add == \7\f\6\2\b\a\d\8\-\1\0\3\7\-\4\c\1\9\-\9\e\f\9\-\3\7\d\e\a\2\c\f\3\a\d\d ]] 00:13:20.527 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:20.527 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:20.527 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:20.527 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- 
# [[ 1bfd4850-fac0-4f90-8f0f-309d05696460 == \1\b\f\d\4\8\5\0\-\f\a\c\0\-\4\f\9\0\-\8\f\0\f\-\3\0\9\d\0\5\6\9\6\4\6\0 ]] 00:13:20.527 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.786 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:21.045 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 7f62bad8-1037-4c19-9ef9-37dea2cf3add 00:13:21.045 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:21.045 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7F62BAD810374C199EF937DEA2CF3ADD 00:13:21.045 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:21.045 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7F62BAD810374C199EF937DEA2CF3ADD 00:13:21.045 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:21.045 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.045 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:21.045 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.045 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:21.045 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.045 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:21.046 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:13:21.046 14:54:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 7F62BAD810374C199EF937DEA2CF3ADD 00:13:21.305 [2024-12-11 14:54:14.125097] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:13:21.305 [2024-12-11 14:54:14.125129] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:21.305 [2024-12-11 14:54:14.125142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.305 request: 00:13:21.305 { 00:13:21.305 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:21.305 "namespace": { 
00:13:21.305 "bdev_name": "invalid", 00:13:21.305 "nsid": 1, 00:13:21.305 "nguid": "7F62BAD810374C199EF937DEA2CF3ADD", 00:13:21.305 "no_auto_visible": false, 00:13:21.305 "hide_metadata": false 00:13:21.305 }, 00:13:21.305 "method": "nvmf_subsystem_add_ns", 00:13:21.305 "req_id": 1 00:13:21.305 } 00:13:21.305 Got JSON-RPC error response 00:13:21.305 response: 00:13:21.305 { 00:13:21.305 "code": -32602, 00:13:21.305 "message": "Invalid parameters" 00:13:21.305 } 00:13:21.305 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:21.305 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:21.305 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:21.305 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:21.305 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 7f62bad8-1037-4c19-9ef9-37dea2cf3add 00:13:21.305 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:13:21.305 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 7F62BAD810374C199EF937DEA2CF3ADD -i 00:13:21.305 14:54:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:23.839 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:23.839 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:23.839 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:23.839 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:23.839 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3067641 00:13:23.839 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3067641 ']' 00:13:23.839 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3067641 00:13:23.839 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:23.839 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.839 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3067641 00:13:23.839 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:23.839 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:23.839 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3067641' 00:13:23.839 killing process with pid 3067641 00:13:23.839 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3067641 00:13:23.839 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3067641 00:13:24.098 14:54:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.098 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:24.098 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:24.098 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:24.098 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:13:24.098 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:24.098 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:13:24.098 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:24.098 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:24.098 rmmod nvme_tcp 00:13:24.098 rmmod nvme_fabrics 00:13:24.098 rmmod nvme_keyring 00:13:24.098 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:24.357 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:13:24.357 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:13:24.357 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3065127 ']' 00:13:24.357 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3065127 00:13:24.357 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3065127 ']' 00:13:24.357 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3065127 00:13:24.357 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:24.357 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:24.357 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3065127 00:13:24.358 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:24.358 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:24.358 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3065127' 00:13:24.358 killing process with pid 3065127 00:13:24.358 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3065127 00:13:24.358 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3065127 00:13:24.358 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:24.358 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:24.358 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:24.358 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:13:24.358 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:13:24.358 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:24.358 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:13:24.617 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:24.617 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:24.617 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.617 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.617 14:54:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.525 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:26.525 00:13:26.525 real 0m26.079s 00:13:26.525 user 0m31.205s 00:13:26.525 sys 0m7.109s 00:13:26.525 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.525 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:26.525 ************************************ 00:13:26.525 END TEST nvmf_ns_masking 00:13:26.525 ************************************ 00:13:26.525 14:54:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:26.525 14:54:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:26.525 14:54:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:26.525 14:54:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.525 14:54:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:26.525 ************************************ 00:13:26.525 START TEST nvmf_nvme_cli 00:13:26.525 ************************************ 00:13:26.525 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:26.786 * Looking for test storage... 
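For reference, a minimal sketch (not part of the captured trace) of the masking sequence ns_masking.sh exercises above, assuming the running target, the Malloc1/Malloc2 bdevs and the NQNs from this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
SUBSYS=nqn.2016-06.io.spdk:cnode1

# Re-create both namespaces with fixed NGUIDs, using -i as in the trace
# above (i.e. not automatically visible to every host).
$RPC nvmf_subsystem_add_ns $SUBSYS Malloc1 -n 1 -g 7F62BAD810374C199EF937DEA2CF3ADD -i
$RPC nvmf_subsystem_add_ns $SUBSYS Malloc2 -n 2 -g 1BFD4850FAC04F908F0F309D05696460 -i

# Expose one namespace to each host NQN.
$RPC nvmf_ns_add_host $SUBSYS 1 nqn.2016-06.io.spdk:host1
$RPC nvmf_ns_add_host $SUBSYS 2 nqn.2016-06.io.spdk:host2

# A bdev_nvme controller attached with -q host1 then sees only namespace 1
# (nvme0n1) and one attached as host2 sees only namespace 2 (nvme1n2),
# which is what the bdev_get_bdevs checks in the trace above verify.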
00:13:26.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:26.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.786 --rc genhtml_branch_coverage=1 00:13:26.786 --rc genhtml_function_coverage=1 00:13:26.786 --rc genhtml_legend=1 00:13:26.786 --rc geninfo_all_blocks=1 00:13:26.786 --rc geninfo_unexecuted_blocks=1 00:13:26.786 00:13:26.786 ' 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:26.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.786 --rc genhtml_branch_coverage=1 00:13:26.786 --rc genhtml_function_coverage=1 00:13:26.786 --rc genhtml_legend=1 00:13:26.786 --rc geninfo_all_blocks=1 00:13:26.786 --rc geninfo_unexecuted_blocks=1 00:13:26.786 00:13:26.786 ' 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:26.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.786 --rc genhtml_branch_coverage=1 00:13:26.786 --rc genhtml_function_coverage=1 00:13:26.786 --rc genhtml_legend=1 00:13:26.786 --rc geninfo_all_blocks=1 00:13:26.786 --rc geninfo_unexecuted_blocks=1 00:13:26.786 00:13:26.786 ' 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:26.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.786 --rc genhtml_branch_coverage=1 00:13:26.786 --rc genhtml_function_coverage=1 00:13:26.786 --rc genhtml_legend=1 00:13:26.786 --rc geninfo_all_blocks=1 00:13:26.786 --rc geninfo_unexecuted_blocks=1 00:13:26.786 00:13:26.786 ' 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:26.786 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:26.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:26.787 14:54:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:13:26.787 14:54:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:33.381 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:33.381 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.381 
14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:33.381 Found net devices under 0000:86:00.0: cvl_0_0 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:33.381 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:33.382 Found net devices under 0000:86:00.1: cvl_0_1 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:33.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:33.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:13:33.382 00:13:33.382 --- 10.0.0.2 ping statistics --- 00:13:33.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.382 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:33.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:33.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:13:33.382 00:13:33.382 --- 10.0.0.1 ping statistics --- 00:13:33.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.382 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3072357 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3072357 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3072357 ']' 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.382 [2024-12-11 14:54:25.781720] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
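For reference, a condensed sketch (not part of the captured trace) of the phy-mode plumbing nvmf_tcp_init performs above; it assumes the two E810 ports cvl_0_0/cvl_0_1 can reach each other (e.g. looped back), as on this test node:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
ping -c 1 10.0.0.2                                   # sanity check (0.402 ms in the run above)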
00:13:33.382 [2024-12-11 14:54:25.781766] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.382 [2024-12-11 14:54:25.862187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.382 [2024-12-11 14:54:25.904925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.382 [2024-12-11 14:54:25.904960] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.382 [2024-12-11 14:54:25.904967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.382 [2024-12-11 14:54:25.904973] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.382 [2024-12-11 14:54:25.904978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.382 [2024-12-11 14:54:25.906363] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.382 [2024-12-11 14:54:25.906493] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.382 [2024-12-11 14:54:25.906603] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.382 [2024-12-11 14:54:25.906604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:33.382 14:54:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.382 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.382 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:33.382 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.382 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.382 [2024-12-11 14:54:26.039694] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.382 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.382 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:33.382 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.382 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.382 Malloc0 00:13:33.382 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.382 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:33.382 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:33.382 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.382 Malloc1 00:13:33.382 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.383 [2024-12-11 14:54:26.119456] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:33.383 00:13:33.383 Discovery Log Number of Records 2, Generation counter 2 00:13:33.383 =====Discovery Log Entry 0====== 00:13:33.383 trtype: tcp 00:13:33.383 adrfam: ipv4 00:13:33.383 subtype: current discovery subsystem 00:13:33.383 treq: not required 00:13:33.383 portid: 0 00:13:33.383 trsvcid: 4420 00:13:33.383 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:13:33.383 traddr: 10.0.0.2 00:13:33.383 eflags: explicit discovery connections, duplicate discovery information 00:13:33.383 sectype: none 00:13:33.383 =====Discovery Log Entry 1====== 00:13:33.383 trtype: tcp 00:13:33.383 adrfam: ipv4 00:13:33.383 subtype: nvme subsystem 00:13:33.383 treq: not required 00:13:33.383 portid: 0 00:13:33.383 trsvcid: 4420 00:13:33.383 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:33.383 traddr: 10.0.0.2 00:13:33.383 eflags: none 00:13:33.383 sectype: none 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:33.383 14:54:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:34.430 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:34.430 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:34.430 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.430 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:34.430 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:34.430 14:54:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:36.336 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:36.336 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:36.336 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:36.595 14:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:36.595 /dev/nvme0n2 ]] 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:36.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.595 14:54:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:36.595 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:36.596 rmmod nvme_tcp 00:13:36.596 rmmod nvme_fabrics 00:13:36.596 rmmod nvme_keyring 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3072357 ']' 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3072357 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3072357 ']' 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3072357 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:36.596 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3072357 00:13:36.855 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:36.855 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:36.855 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3072357' 00:13:36.855 killing process with pid 3072357 00:13:36.855 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3072357 00:13:36.855 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3072357 00:13:36.855 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:36.855 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:36.855 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:36.855 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:13:36.855 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:13:36.855 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:36.855 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:13:36.855 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:36.855 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:36.855 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.855 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.855 14:54:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.396 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:39.396 00:13:39.396 real 0m12.406s 00:13:39.396 user 0m17.498s 00:13:39.396 sys 0m5.108s 00:13:39.396 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.396 14:54:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:39.396 ************************************ 00:13:39.396 END TEST nvmf_nvme_cli 00:13:39.396 ************************************ 00:13:39.396 14:54:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:39.396 14:54:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:39.396 14:54:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:39.396 14:54:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.396 14:54:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:39.396 ************************************ 00:13:39.396 START TEST nvmf_vfio_user 00:13:39.396 ************************************ 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:39.396 * Looking for test storage... 00:13:39.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:39.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.396 --rc genhtml_branch_coverage=1 00:13:39.396 --rc genhtml_function_coverage=1 00:13:39.396 --rc genhtml_legend=1 00:13:39.396 --rc geninfo_all_blocks=1 00:13:39.396 --rc geninfo_unexecuted_blocks=1 00:13:39.396 00:13:39.396 ' 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:39.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.396 --rc genhtml_branch_coverage=1 00:13:39.396 --rc genhtml_function_coverage=1 00:13:39.396 --rc genhtml_legend=1 00:13:39.396 --rc geninfo_all_blocks=1 00:13:39.396 --rc geninfo_unexecuted_blocks=1 00:13:39.396 00:13:39.396 ' 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:39.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.396 --rc genhtml_branch_coverage=1 00:13:39.396 --rc genhtml_function_coverage=1 00:13:39.396 --rc genhtml_legend=1 00:13:39.396 --rc geninfo_all_blocks=1 00:13:39.396 --rc geninfo_unexecuted_blocks=1 00:13:39.396 00:13:39.396 ' 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:39.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.396 --rc genhtml_branch_coverage=1 00:13:39.396 --rc genhtml_function_coverage=1 00:13:39.396 --rc genhtml_legend=1 00:13:39.396 --rc geninfo_all_blocks=1 00:13:39.396 --rc geninfo_unexecuted_blocks=1 00:13:39.396 00:13:39.396 ' 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.396 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:39.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
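For quick reference, the nvmf_nvme_cli run traced above reduces to the following target-side RPC and host-side nvme-cli sequence. This is a condensed sketch taken from the xtrace output, not the literal script: rpc_cmd is the suite's wrapper around scripts/rpc.py, and the --hostnqn/--hostid values are the generated ones shown in the log.

  # target side: TCP transport, two malloc bdevs, one subsystem with two namespaces and listeners
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd bdev_malloc_create 64 512 -b Malloc1
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # host side: discover, connect, verify the two namespaces by serial, then disconnect and tear down
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1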
00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3073441 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3073441' 00:13:39.397 Process pid: 3073441 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3073441 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3073441 ']' 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:39.397 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:39.397 [2024-12-11 14:54:32.286936] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:13:39.397 [2024-12-11 14:54:32.286984] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.397 [2024-12-11 14:54:32.360257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:39.397 [2024-12-11 14:54:32.400066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.397 [2024-12-11 14:54:32.400103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
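The nvmf_vfio_user run starting here launches its own target on all four cores and waits for the RPC socket before configuring anything. A condensed reading of the trace (not the literal script; waitforlisten and killprocess are helpers sourced from autotest_common.sh):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!                      # 3073441 in this run
  trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
  waitforlisten $nvmfpid          # blocks until the target listens on /var/tmp/spdk.sock
  sleep 1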
00:13:39.397 [2024-12-11 14:54:32.400110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.397 [2024-12-11 14:54:32.400116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.397 [2024-12-11 14:54:32.400121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.397 [2024-12-11 14:54:32.401662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.397 [2024-12-11 14:54:32.401772] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.397 [2024-12-11 14:54:32.401875] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.397 [2024-12-11 14:54:32.401876] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:39.656 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:39.656 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:39.656 14:54:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:40.592 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:40.851 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:40.851 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:40.851 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:40.851 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:40.851 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:41.109 Malloc1 00:13:41.109 14:54:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:41.367 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:41.367 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:41.626 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:41.626 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:41.626 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:41.884 Malloc2 00:13:41.885 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 
-a -s SPDK2 00:13:42.143 14:54:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:42.143 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:42.401 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:42.401 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:42.401 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:42.401 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:42.401 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:42.401 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:42.401 [2024-12-11 14:54:35.400439] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:13:42.401 [2024-12-11 14:54:35.400478] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074115 ] 00:13:42.401 [2024-12-11 14:54:35.441051] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:42.662 [2024-12-11 14:54:35.449499] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:42.662 [2024-12-11 14:54:35.449521] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f553ca51000 00:13:42.662 [2024-12-11 14:54:35.450494] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:42.662 [2024-12-11 14:54:35.451499] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:42.662 [2024-12-11 14:54:35.452502] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:42.662 [2024-12-11 14:54:35.453508] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:42.662 [2024-12-11 14:54:35.454511] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:42.662 [2024-12-11 14:54:35.455517] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:42.662 [2024-12-11 14:54:35.456517] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, 
Offset 0x0, Flags 0x3, Cap offset 0 00:13:42.662 [2024-12-11 14:54:35.457529] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:42.662 [2024-12-11 14:54:35.458533] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:42.662 [2024-12-11 14:54:35.458542] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f553ca46000 00:13:42.662 [2024-12-11 14:54:35.459484] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:42.662 [2024-12-11 14:54:35.469085] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:42.662 [2024-12-11 14:54:35.469116] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:42.662 [2024-12-11 14:54:35.477644] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:42.662 [2024-12-11 14:54:35.477683] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:42.662 [2024-12-11 14:54:35.477757] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:42.662 [2024-12-11 14:54:35.477772] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:42.662 [2024-12-11 14:54:35.477778] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:42.662 [2024-12-11 14:54:35.478647] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:42.662 [2024-12-11 14:54:35.478656] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:42.662 [2024-12-11 14:54:35.478662] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:42.662 [2024-12-11 14:54:35.479653] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:42.662 [2024-12-11 14:54:35.479661] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:42.662 [2024-12-11 14:54:35.479668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:42.662 [2024-12-11 14:54:35.480658] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:42.662 [2024-12-11 14:54:35.480666] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:42.662 [2024-12-11 14:54:35.481669] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x1c, value 0x0 00:13:42.662 [2024-12-11 14:54:35.481677] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:42.662 [2024-12-11 14:54:35.481682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:42.662 [2024-12-11 14:54:35.481688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:42.662 [2024-12-11 14:54:35.481797] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:42.662 [2024-12-11 14:54:35.481801] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:42.662 [2024-12-11 14:54:35.481806] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:42.662 [2024-12-11 14:54:35.482673] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:42.662 [2024-12-11 14:54:35.483677] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:42.662 [2024-12-11 14:54:35.484685] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:42.662 [2024-12-11 14:54:35.485683] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:42.662 [2024-12-11 14:54:35.485747] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:42.662 [2024-12-11 14:54:35.486702] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:42.662 [2024-12-11 14:54:35.486710] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:42.662 [2024-12-11 14:54:35.486714] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:42.662 [2024-12-11 14:54:35.486731] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:42.662 [2024-12-11 14:54:35.486738] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:42.662 [2024-12-11 14:54:35.486749] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:42.662 [2024-12-11 14:54:35.486754] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:42.662 [2024-12-11 14:54:35.486757] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:42.662 [2024-12-11 14:54:35.486769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 
0x2000002fb000 PRP2 0x0 00:13:42.662 [2024-12-11 14:54:35.486812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:42.662 [2024-12-11 14:54:35.486820] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:42.662 [2024-12-11 14:54:35.486827] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:42.662 [2024-12-11 14:54:35.486831] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:42.663 [2024-12-11 14:54:35.486835] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:42.663 [2024-12-11 14:54:35.486840] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:42.663 [2024-12-11 14:54:35.486844] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:42.663 [2024-12-11 14:54:35.486849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.486857] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.486868] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:42.663 [2024-12-11 14:54:35.486884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:42.663 [2024-12-11 14:54:35.486894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:42.663 [2024-12-11 14:54:35.486901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:42.663 [2024-12-11 14:54:35.486909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:42.663 [2024-12-11 14:54:35.486917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:42.663 [2024-12-11 14:54:35.486921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.486929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.486937] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:42.663 [2024-12-11 14:54:35.486945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:42.663 [2024-12-11 14:54:35.486950] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 
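The spdk_nvme_identify trace running through this stretch operates against the vfio-user target configured just above. Condensed from the xtrace (rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py path), setup_nvmf_vfio_user amounts to:

  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1 /var/run/vfio-user/domain/vfio-user2/2
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
  # host side of the first device: identify the controller through the vfio-user socket
  build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci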
00:13:42.663 [2024-12-11 14:54:35.486955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.486961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.486966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.486974] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:42.663 [2024-12-11 14:54:35.486983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:42.663 [2024-12-11 14:54:35.487033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.487043] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.487050] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:42.663 [2024-12-11 14:54:35.487054] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:42.663 [2024-12-11 14:54:35.487057] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:42.663 [2024-12-11 14:54:35.487063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:42.663 [2024-12-11 14:54:35.487074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:42.663 [2024-12-11 14:54:35.487083] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:42.663 [2024-12-11 14:54:35.487094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.487101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.487107] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:42.663 [2024-12-11 14:54:35.487111] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:42.663 [2024-12-11 14:54:35.487115] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:42.663 [2024-12-11 14:54:35.487120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:42.663 [2024-12-11 14:54:35.487144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:42.663 [2024-12-11 14:54:35.487155] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
identify namespace id descriptors (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.487166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.487172] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:42.663 [2024-12-11 14:54:35.487176] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:42.663 [2024-12-11 14:54:35.487179] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:42.663 [2024-12-11 14:54:35.487185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:42.663 [2024-12-11 14:54:35.487201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:42.663 [2024-12-11 14:54:35.487209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.487214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.487221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.487226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.487231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.487237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.487242] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:42.663 [2024-12-11 14:54:35.487246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:42.663 [2024-12-11 14:54:35.487250] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:42.663 [2024-12-11 14:54:35.487267] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:42.663 [2024-12-11 14:54:35.487278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:42.663 [2024-12-11 14:54:35.487289] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:42.663 [2024-12-11 14:54:35.487299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:42.663 [2024-12-11 14:54:35.487309] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE 
THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:42.663 [2024-12-11 14:54:35.487319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:42.663 [2024-12-11 14:54:35.487329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:42.663 [2024-12-11 14:54:35.487341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:42.663 [2024-12-11 14:54:35.487352] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:42.663 [2024-12-11 14:54:35.487357] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:42.663 [2024-12-11 14:54:35.487360] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:42.663 [2024-12-11 14:54:35.487363] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:42.663 [2024-12-11 14:54:35.487366] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:42.663 [2024-12-11 14:54:35.487372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:42.663 [2024-12-11 14:54:35.487379] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:42.663 [2024-12-11 14:54:35.487382] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:42.663 [2024-12-11 14:54:35.487385] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:42.663 [2024-12-11 14:54:35.487391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:42.663 [2024-12-11 14:54:35.487397] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:42.663 [2024-12-11 14:54:35.487401] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:42.663 [2024-12-11 14:54:35.487404] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:42.663 [2024-12-11 14:54:35.487409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:42.663 [2024-12-11 14:54:35.487416] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:42.663 [2024-12-11 14:54:35.487420] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:42.663 [2024-12-11 14:54:35.487426] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:42.663 [2024-12-11 14:54:35.487431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:42.663 [2024-12-11 14:54:35.487437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:42.663 [2024-12-11 14:54:35.487448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 
cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:42.663 [2024-12-11 14:54:35.487457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:42.664 [2024-12-11 14:54:35.487463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:42.664 ===================================================== 00:13:42.664 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:42.664 ===================================================== 00:13:42.664 Controller Capabilities/Features 00:13:42.664 ================================ 00:13:42.664 Vendor ID: 4e58 00:13:42.664 Subsystem Vendor ID: 4e58 00:13:42.664 Serial Number: SPDK1 00:13:42.664 Model Number: SPDK bdev Controller 00:13:42.664 Firmware Version: 25.01 00:13:42.664 Recommended Arb Burst: 6 00:13:42.664 IEEE OUI Identifier: 8d 6b 50 00:13:42.664 Multi-path I/O 00:13:42.664 May have multiple subsystem ports: Yes 00:13:42.664 May have multiple controllers: Yes 00:13:42.664 Associated with SR-IOV VF: No 00:13:42.664 Max Data Transfer Size: 131072 00:13:42.664 Max Number of Namespaces: 32 00:13:42.664 Max Number of I/O Queues: 127 00:13:42.664 NVMe Specification Version (VS): 1.3 00:13:42.664 NVMe Specification Version (Identify): 1.3 00:13:42.664 Maximum Queue Entries: 256 00:13:42.664 Contiguous Queues Required: Yes 00:13:42.664 Arbitration Mechanisms Supported 00:13:42.664 Weighted Round Robin: Not Supported 00:13:42.664 Vendor Specific: Not Supported 00:13:42.664 Reset Timeout: 15000 ms 00:13:42.664 Doorbell Stride: 4 bytes 00:13:42.664 NVM Subsystem Reset: Not Supported 00:13:42.664 Command Sets Supported 00:13:42.664 NVM Command Set: Supported 00:13:42.664 Boot Partition: Not Supported 00:13:42.664 Memory Page Size Minimum: 4096 bytes 00:13:42.664 Memory Page Size Maximum: 4096 bytes 00:13:42.664 Persistent Memory Region: Not Supported 00:13:42.664 Optional Asynchronous Events Supported 00:13:42.664 Namespace Attribute Notices: Supported 00:13:42.664 Firmware Activation Notices: Not Supported 00:13:42.664 ANA Change Notices: Not Supported 00:13:42.664 PLE Aggregate Log Change Notices: Not Supported 00:13:42.664 LBA Status Info Alert Notices: Not Supported 00:13:42.664 EGE Aggregate Log Change Notices: Not Supported 00:13:42.664 Normal NVM Subsystem Shutdown event: Not Supported 00:13:42.664 Zone Descriptor Change Notices: Not Supported 00:13:42.664 Discovery Log Change Notices: Not Supported 00:13:42.664 Controller Attributes 00:13:42.664 128-bit Host Identifier: Supported 00:13:42.664 Non-Operational Permissive Mode: Not Supported 00:13:42.664 NVM Sets: Not Supported 00:13:42.664 Read Recovery Levels: Not Supported 00:13:42.664 Endurance Groups: Not Supported 00:13:42.664 Predictable Latency Mode: Not Supported 00:13:42.664 Traffic Based Keep ALive: Not Supported 00:13:42.664 Namespace Granularity: Not Supported 00:13:42.664 SQ Associations: Not Supported 00:13:42.664 UUID List: Not Supported 00:13:42.664 Multi-Domain Subsystem: Not Supported 00:13:42.664 Fixed Capacity Management: Not Supported 00:13:42.664 Variable Capacity Management: Not Supported 00:13:42.664 Delete Endurance Group: Not Supported 00:13:42.664 Delete NVM Set: Not Supported 00:13:42.664 Extended LBA Formats Supported: Not Supported 00:13:42.664 Flexible Data Placement Supported: Not Supported 00:13:42.664 00:13:42.664 Controller Memory Buffer Support 00:13:42.664 ================================ 
00:13:42.664 Supported: No 00:13:42.664 00:13:42.664 Persistent Memory Region Support 00:13:42.664 ================================ 00:13:42.664 Supported: No 00:13:42.664 00:13:42.664 Admin Command Set Attributes 00:13:42.664 ============================ 00:13:42.664 Security Send/Receive: Not Supported 00:13:42.664 Format NVM: Not Supported 00:13:42.664 Firmware Activate/Download: Not Supported 00:13:42.664 Namespace Management: Not Supported 00:13:42.664 Device Self-Test: Not Supported 00:13:42.664 Directives: Not Supported 00:13:42.664 NVMe-MI: Not Supported 00:13:42.664 Virtualization Management: Not Supported 00:13:42.664 Doorbell Buffer Config: Not Supported 00:13:42.664 Get LBA Status Capability: Not Supported 00:13:42.664 Command & Feature Lockdown Capability: Not Supported 00:13:42.664 Abort Command Limit: 4 00:13:42.664 Async Event Request Limit: 4 00:13:42.664 Number of Firmware Slots: N/A 00:13:42.664 Firmware Slot 1 Read-Only: N/A 00:13:42.664 Firmware Activation Without Reset: N/A 00:13:42.664 Multiple Update Detection Support: N/A 00:13:42.664 Firmware Update Granularity: No Information Provided 00:13:42.664 Per-Namespace SMART Log: No 00:13:42.664 Asymmetric Namespace Access Log Page: Not Supported 00:13:42.664 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:42.664 Command Effects Log Page: Supported 00:13:42.664 Get Log Page Extended Data: Supported 00:13:42.664 Telemetry Log Pages: Not Supported 00:13:42.664 Persistent Event Log Pages: Not Supported 00:13:42.664 Supported Log Pages Log Page: May Support 00:13:42.664 Commands Supported & Effects Log Page: Not Supported 00:13:42.664 Feature Identifiers & Effects Log Page:May Support 00:13:42.664 NVMe-MI Commands & Effects Log Page: May Support 00:13:42.664 Data Area 4 for Telemetry Log: Not Supported 00:13:42.664 Error Log Page Entries Supported: 128 00:13:42.664 Keep Alive: Supported 00:13:42.664 Keep Alive Granularity: 10000 ms 00:13:42.664 00:13:42.664 NVM Command Set Attributes 00:13:42.664 ========================== 00:13:42.664 Submission Queue Entry Size 00:13:42.664 Max: 64 00:13:42.664 Min: 64 00:13:42.664 Completion Queue Entry Size 00:13:42.664 Max: 16 00:13:42.664 Min: 16 00:13:42.664 Number of Namespaces: 32 00:13:42.664 Compare Command: Supported 00:13:42.664 Write Uncorrectable Command: Not Supported 00:13:42.664 Dataset Management Command: Supported 00:13:42.664 Write Zeroes Command: Supported 00:13:42.664 Set Features Save Field: Not Supported 00:13:42.664 Reservations: Not Supported 00:13:42.664 Timestamp: Not Supported 00:13:42.664 Copy: Supported 00:13:42.664 Volatile Write Cache: Present 00:13:42.664 Atomic Write Unit (Normal): 1 00:13:42.664 Atomic Write Unit (PFail): 1 00:13:42.664 Atomic Compare & Write Unit: 1 00:13:42.664 Fused Compare & Write: Supported 00:13:42.664 Scatter-Gather List 00:13:42.664 SGL Command Set: Supported (Dword aligned) 00:13:42.664 SGL Keyed: Not Supported 00:13:42.664 SGL Bit Bucket Descriptor: Not Supported 00:13:42.664 SGL Metadata Pointer: Not Supported 00:13:42.664 Oversized SGL: Not Supported 00:13:42.664 SGL Metadata Address: Not Supported 00:13:42.664 SGL Offset: Not Supported 00:13:42.664 Transport SGL Data Block: Not Supported 00:13:42.664 Replay Protected Memory Block: Not Supported 00:13:42.664 00:13:42.664 Firmware Slot Information 00:13:42.664 ========================= 00:13:42.664 Active slot: 1 00:13:42.664 Slot 1 Firmware Revision: 25.01 00:13:42.664 00:13:42.664 00:13:42.664 Commands Supported and Effects 00:13:42.664 ============================== 
00:13:42.664 Admin Commands 00:13:42.664 -------------- 00:13:42.664 Get Log Page (02h): Supported 00:13:42.664 Identify (06h): Supported 00:13:42.664 Abort (08h): Supported 00:13:42.664 Set Features (09h): Supported 00:13:42.664 Get Features (0Ah): Supported 00:13:42.664 Asynchronous Event Request (0Ch): Supported 00:13:42.664 Keep Alive (18h): Supported 00:13:42.664 I/O Commands 00:13:42.664 ------------ 00:13:42.664 Flush (00h): Supported LBA-Change 00:13:42.664 Write (01h): Supported LBA-Change 00:13:42.664 Read (02h): Supported 00:13:42.664 Compare (05h): Supported 00:13:42.664 Write Zeroes (08h): Supported LBA-Change 00:13:42.664 Dataset Management (09h): Supported LBA-Change 00:13:42.664 Copy (19h): Supported LBA-Change 00:13:42.664 00:13:42.664 Error Log 00:13:42.664 ========= 00:13:42.664 00:13:42.664 Arbitration 00:13:42.664 =========== 00:13:42.664 Arbitration Burst: 1 00:13:42.664 00:13:42.664 Power Management 00:13:42.664 ================ 00:13:42.664 Number of Power States: 1 00:13:42.664 Current Power State: Power State #0 00:13:42.664 Power State #0: 00:13:42.664 Max Power: 0.00 W 00:13:42.664 Non-Operational State: Operational 00:13:42.664 Entry Latency: Not Reported 00:13:42.664 Exit Latency: Not Reported 00:13:42.664 Relative Read Throughput: 0 00:13:42.664 Relative Read Latency: 0 00:13:42.664 Relative Write Throughput: 0 00:13:42.664 Relative Write Latency: 0 00:13:42.664 Idle Power: Not Reported 00:13:42.664 Active Power: Not Reported 00:13:42.664 Non-Operational Permissive Mode: Not Supported 00:13:42.664 00:13:42.664 Health Information 00:13:42.664 ================== 00:13:42.664 Critical Warnings: 00:13:42.664 Available Spare Space: OK 00:13:42.664 Temperature: OK 00:13:42.664 Device Reliability: OK 00:13:42.664 Read Only: No 00:13:42.664 Volatile Memory Backup: OK 00:13:42.664 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:42.664 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:42.664 Available Spare: 0% 00:13:42.664 Available Sp[2024-12-11 14:54:35.487543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:42.664 [2024-12-11 14:54:35.487553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:42.664 [2024-12-11 14:54:35.487576] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:42.665 [2024-12-11 14:54:35.487585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:42.665 [2024-12-11 14:54:35.487591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:42.665 [2024-12-11 14:54:35.487596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:42.665 [2024-12-11 14:54:35.487602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:42.665 [2024-12-11 14:54:35.487705] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:42.665 [2024-12-11 14:54:35.487714] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:42.665 [2024-12-11 
14:54:35.488713] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:42.665 [2024-12-11 14:54:35.488763] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:42.665 [2024-12-11 14:54:35.488769] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:42.665 [2024-12-11 14:54:35.489715] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:42.665 [2024-12-11 14:54:35.489726] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:42.665 [2024-12-11 14:54:35.489779] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:42.665 [2024-12-11 14:54:35.491744] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:42.665 are Threshold: 0% 00:13:42.665 Life Percentage Used: 0% 00:13:42.665 Data Units Read: 0 00:13:42.665 Data Units Written: 0 00:13:42.665 Host Read Commands: 0 00:13:42.665 Host Write Commands: 0 00:13:42.665 Controller Busy Time: 0 minutes 00:13:42.665 Power Cycles: 0 00:13:42.665 Power On Hours: 0 hours 00:13:42.665 Unsafe Shutdowns: 0 00:13:42.665 Unrecoverable Media Errors: 0 00:13:42.665 Lifetime Error Log Entries: 0 00:13:42.665 Warning Temperature Time: 0 minutes 00:13:42.665 Critical Temperature Time: 0 minutes 00:13:42.665 00:13:42.665 Number of Queues 00:13:42.665 ================ 00:13:42.665 Number of I/O Submission Queues: 127 00:13:42.665 Number of I/O Completion Queues: 127 00:13:42.665 00:13:42.665 Active Namespaces 00:13:42.665 ================= 00:13:42.665 Namespace ID:1 00:13:42.665 Error Recovery Timeout: Unlimited 00:13:42.665 Command Set Identifier: NVM (00h) 00:13:42.665 Deallocate: Supported 00:13:42.665 Deallocated/Unwritten Error: Not Supported 00:13:42.665 Deallocated Read Value: Unknown 00:13:42.665 Deallocate in Write Zeroes: Not Supported 00:13:42.665 Deallocated Guard Field: 0xFFFF 00:13:42.665 Flush: Supported 00:13:42.665 Reservation: Supported 00:13:42.665 Namespace Sharing Capabilities: Multiple Controllers 00:13:42.665 Size (in LBAs): 131072 (0GiB) 00:13:42.665 Capacity (in LBAs): 131072 (0GiB) 00:13:42.665 Utilization (in LBAs): 131072 (0GiB) 00:13:42.665 NGUID: BA15968107FC4172AAE0A3288E68C95F 00:13:42.665 UUID: ba159681-07fc-4172-aae0-a3288e68c95f 00:13:42.665 Thin Provisioning: Not Supported 00:13:42.665 Per-NS Atomic Units: Yes 00:13:42.665 Atomic Boundary Size (Normal): 0 00:13:42.665 Atomic Boundary Size (PFail): 0 00:13:42.665 Atomic Boundary Offset: 0 00:13:42.665 Maximum Single Source Range Length: 65535 00:13:42.665 Maximum Copy Length: 65535 00:13:42.665 Maximum Source Range Count: 1 00:13:42.665 NGUID/EUI64 Never Reused: No 00:13:42.665 Namespace Write Protected: No 00:13:42.665 Number of LBA Formats: 1 00:13:42.665 Current LBA Format: LBA Format #00 00:13:42.665 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:42.665 00:13:42.665 14:54:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w 
read -t 5 -c 0x2 00:13:42.924 [2024-12-11 14:54:35.727020] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:48.198 Initializing NVMe Controllers 00:13:48.198 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:48.198 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:48.198 Initialization complete. Launching workers. 00:13:48.198 ======================================================== 00:13:48.198 Latency(us) 00:13:48.198 Device Information : IOPS MiB/s Average min max 00:13:48.198 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39887.20 155.81 3209.62 984.66 9209.86 00:13:48.198 ======================================================== 00:13:48.198 Total : 39887.20 155.81 3209.62 984.66 9209.86 00:13:48.198 00:13:48.198 [2024-12-11 14:54:40.748746] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:48.198 14:54:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:48.198 [2024-12-11 14:54:40.985859] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:53.469 Initializing NVMe Controllers 00:13:53.469 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:53.469 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:53.469 Initialization complete. Launching workers. 
00:13:53.469 ======================================================== 00:13:53.469 Latency(us) 00:13:53.469 Device Information : IOPS MiB/s Average min max 00:13:53.469 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7984.50 7581.77 10974.46 00:13:53.469 ======================================================== 00:13:53.469 Total : 16051.20 62.70 7984.50 7581.77 10974.46 00:13:53.469 00:13:53.469 [2024-12-11 14:54:46.023068] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:53.469 14:54:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:53.469 [2024-12-11 14:54:46.238096] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:58.744 [2024-12-11 14:54:51.309431] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:58.744 Initializing NVMe Controllers 00:13:58.744 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:58.744 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:58.744 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:58.744 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:58.744 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:58.744 Initialization complete. Launching workers. 00:13:58.744 Starting thread on core 2 00:13:58.744 Starting thread on core 3 00:13:58.744 Starting thread on core 1 00:13:58.744 14:54:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:58.744 [2024-12-11 14:54:51.615592] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:02.032 [2024-12-11 14:54:54.697754] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:02.032 Initializing NVMe Controllers 00:14:02.032 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:02.032 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:02.032 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:02.032 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:02.032 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:02.032 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:02.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration run with configuration: 00:14:02.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:02.032 Initialization complete. Launching workers. 
00:14:02.032 Starting thread on core 1 with urgent priority queue 00:14:02.032 Starting thread on core 2 with urgent priority queue 00:14:02.032 Starting thread on core 3 with urgent priority queue 00:14:02.032 Starting thread on core 0 with urgent priority queue 00:14:02.032 SPDK bdev Controller (SPDK1 ) core 0: 9274.00 IO/s 10.78 secs/100000 ios 00:14:02.032 SPDK bdev Controller (SPDK1 ) core 1: 8162.00 IO/s 12.25 secs/100000 ios 00:14:02.032 SPDK bdev Controller (SPDK1 ) core 2: 8277.33 IO/s 12.08 secs/100000 ios 00:14:02.032 SPDK bdev Controller (SPDK1 ) core 3: 10351.67 IO/s 9.66 secs/100000 ios 00:14:02.032 ======================================================== 00:14:02.032 00:14:02.032 14:54:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:02.032 [2024-12-11 14:54:54.986593] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:02.032 Initializing NVMe Controllers 00:14:02.032 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:02.032 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:02.032 Namespace ID: 1 size: 0GB 00:14:02.032 Initialization complete. 00:14:02.032 INFO: using host memory buffer for IO 00:14:02.032 Hello world! 00:14:02.032 [2024-12-11 14:54:55.020812] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:02.032 14:54:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:02.295 [2024-12-11 14:54:55.310612] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:03.682 Initializing NVMe Controllers 00:14:03.682 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:03.682 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:03.682 Initialization complete. Launching workers. 
00:14:03.682 submit (in ns) avg, min, max = 6760.6, 3207.8, 4995850.4 00:14:03.682 complete (in ns) avg, min, max = 21112.5, 1799.1, 6988077.4 00:14:03.682 00:14:03.682 Submit histogram 00:14:03.682 ================ 00:14:03.682 Range in us Cumulative Count 00:14:03.682 3.200 - 3.214: 0.0187% ( 3) 00:14:03.682 3.214 - 3.228: 0.0499% ( 5) 00:14:03.682 3.228 - 3.242: 0.0748% ( 4) 00:14:03.682 3.242 - 3.256: 0.0873% ( 2) 00:14:03.682 3.256 - 3.270: 0.1621% ( 12) 00:14:03.682 3.270 - 3.283: 0.5986% ( 70) 00:14:03.682 3.283 - 3.297: 2.4567% ( 298) 00:14:03.682 3.297 - 3.311: 5.2687% ( 451) 00:14:03.682 3.311 - 3.325: 8.8477% ( 574) 00:14:03.682 3.325 - 3.339: 13.9481% ( 818) 00:14:03.682 3.339 - 3.353: 19.9339% ( 960) 00:14:03.682 3.353 - 3.367: 25.7950% ( 940) 00:14:03.682 3.367 - 3.381: 31.3505% ( 891) 00:14:03.682 3.381 - 3.395: 37.2428% ( 945) 00:14:03.682 3.395 - 3.409: 42.2559% ( 804) 00:14:03.682 3.409 - 3.423: 47.0819% ( 774) 00:14:03.682 3.423 - 3.437: 52.3257% ( 841) 00:14:03.682 3.437 - 3.450: 58.0496% ( 918) 00:14:03.682 3.450 - 3.464: 62.5701% ( 725) 00:14:03.682 3.464 - 3.478: 68.2255% ( 907) 00:14:03.682 3.478 - 3.492: 73.7124% ( 880) 00:14:03.682 3.492 - 3.506: 77.5097% ( 609) 00:14:03.682 3.506 - 3.520: 80.6834% ( 509) 00:14:03.682 3.520 - 3.534: 83.4705% ( 447) 00:14:03.682 3.534 - 3.548: 85.2164% ( 280) 00:14:03.682 3.548 - 3.562: 86.4634% ( 200) 00:14:03.682 3.562 - 3.590: 87.7977% ( 214) 00:14:03.682 3.590 - 3.617: 88.8265% ( 165) 00:14:03.682 3.617 - 3.645: 90.3230% ( 240) 00:14:03.682 3.645 - 3.673: 92.0252% ( 273) 00:14:03.682 3.673 - 3.701: 93.8770% ( 297) 00:14:03.682 3.701 - 3.729: 95.6478% ( 284) 00:14:03.682 3.729 - 3.757: 97.0508% ( 225) 00:14:03.682 3.757 - 3.784: 98.0609% ( 162) 00:14:03.682 3.784 - 3.812: 98.8153% ( 121) 00:14:03.682 3.812 - 3.840: 99.2455% ( 69) 00:14:03.682 3.840 - 3.868: 99.4825% ( 38) 00:14:03.682 3.868 - 3.896: 99.5511% ( 11) 00:14:03.682 3.896 - 3.923: 99.5947% ( 7) 00:14:03.682 3.923 - 3.951: 99.6009% ( 1) 00:14:03.682 3.951 - 3.979: 99.6072% ( 1) 00:14:03.682 4.285 - 4.313: 99.6134% ( 1) 00:14:03.682 4.981 - 5.009: 99.6197% ( 1) 00:14:03.682 5.148 - 5.176: 99.6259% ( 1) 00:14:03.682 5.176 - 5.203: 99.6321% ( 1) 00:14:03.682 5.231 - 5.259: 99.6384% ( 1) 00:14:03.682 5.287 - 5.315: 99.6446% ( 1) 00:14:03.682 5.370 - 5.398: 99.6508% ( 1) 00:14:03.682 5.426 - 5.454: 99.6571% ( 1) 00:14:03.682 5.482 - 5.510: 99.6695% ( 2) 00:14:03.682 5.510 - 5.537: 99.6882% ( 3) 00:14:03.682 5.649 - 5.677: 99.6945% ( 1) 00:14:03.682 5.677 - 5.704: 99.7007% ( 1) 00:14:03.682 5.704 - 5.732: 99.7069% ( 1) 00:14:03.682 5.732 - 5.760: 99.7194% ( 2) 00:14:03.682 5.788 - 5.816: 99.7319% ( 2) 00:14:03.682 5.843 - 5.871: 99.7381% ( 1) 00:14:03.682 5.899 - 5.927: 99.7444% ( 1) 00:14:03.682 6.094 - 6.122: 99.7506% ( 1) 00:14:03.682 6.233 - 6.261: 99.7568% ( 1) 00:14:03.682 6.261 - 6.289: 99.7631% ( 1) 00:14:03.682 6.289 - 6.317: 99.7693% ( 1) 00:14:03.682 6.317 - 6.344: 99.7818% ( 2) 00:14:03.682 6.344 - 6.372: 99.7880% ( 1) 00:14:03.682 6.400 - 6.428: 99.8005% ( 2) 00:14:03.682 6.623 - 6.650: 99.8067% ( 1) 00:14:03.682 6.734 - 6.762: 99.8129% ( 1) 00:14:03.682 6.762 - 6.790: 99.8192% ( 1) 00:14:03.682 6.817 - 6.845: 99.8254% ( 1) 00:14:03.682 6.901 - 6.929: 99.8316% ( 1) 00:14:03.682 6.929 - 6.957: 99.8379% ( 1) 00:14:03.682 6.957 - 6.984: 99.8441% ( 1) 00:14:03.682 7.179 - 7.235: 99.8628% ( 3) 00:14:03.682 7.235 - 7.290: 99.8691% ( 1) 00:14:03.682 7.346 - 7.402: 99.8753% ( 1) 00:14:03.682 7.513 - 7.569: 99.8878% ( 2) 00:14:03.682 7.680 - 7.736: 99.8940% ( 1) 
00:14:03.682 7.736 - 7.791: 99.9002% ( 1) 00:14:03.682 7.791 - 7.847: 99.9065% ( 1) 00:14:03.682 [2024-12-11 14:54:56.332739] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:03.682 8.070 - 8.125: 99.9127% ( 1) 00:14:03.682 41.405 - 41.628: 99.9189% ( 1) 00:14:03.682 3989.148 - 4017.642: 99.9938% ( 12) 00:14:03.682 4986.435 - 5014.929: 100.0000% ( 1) 00:14:03.682 00:14:03.683 Complete histogram 00:14:03.683 ================== 00:14:03.683 Range in us Cumulative Count 00:14:03.683 1.795 - 1.809: 0.0811% ( 13) 00:14:03.683 1.809 - 1.823: 0.6110% ( 85) 00:14:03.683 1.823 - 1.837: 0.9166% ( 49) 00:14:03.683 1.837 - 1.850: 1.2533% ( 54) 00:14:03.683 1.850 - 1.864: 14.8647% ( 2183) 00:14:03.683 1.864 - 1.878: 58.5235% ( 7002) 00:14:03.683 1.878 - 1.892: 85.1727% ( 4274) 00:14:03.683 1.892 - 1.906: 93.1725% ( 1283) 00:14:03.683 1.906 - 1.920: 95.4171% ( 360) 00:14:03.683 1.920 - 1.934: 96.3711% ( 153) 00:14:03.683 1.934 - 1.948: 97.6306% ( 202) 00:14:03.683 1.948 - 1.962: 98.6657% ( 166) 00:14:03.683 1.962 - 1.976: 99.0897% ( 68) 00:14:03.683 1.976 - 1.990: 99.2206% ( 21) 00:14:03.683 1.990 - 2.003: 99.2580% ( 6) 00:14:03.683 2.003 - 2.017: 99.2767% ( 3) 00:14:03.683 2.017 - 2.031: 99.2892% ( 2) 00:14:03.683 2.059 - 2.073: 99.2954% ( 1) 00:14:03.683 2.073 - 2.087: 99.3079% ( 2) 00:14:03.683 2.087 - 2.101: 99.3204% ( 2) 00:14:03.683 2.129 - 2.143: 99.3266% ( 1) 00:14:03.683 2.170 - 2.184: 99.3328% ( 1) 00:14:03.683 2.365 - 2.379: 99.3391% ( 1) 00:14:03.683 2.490 - 2.504: 99.3453% ( 1) 00:14:03.683 3.979 - 4.007: 99.3578% ( 2) 00:14:03.683 4.063 - 4.090: 99.3640% ( 1) 00:14:03.683 4.118 - 4.146: 99.3702% ( 1) 00:14:03.683 4.146 - 4.174: 99.3765% ( 1) 00:14:03.683 4.619 - 4.647: 99.3827% ( 1) 00:14:03.683 4.647 - 4.675: 99.3890% ( 1) 00:14:03.683 4.675 - 4.703: 99.3952% ( 1) 00:14:03.683 5.037 - 5.064: 99.4014% ( 1) 00:14:03.683 5.259 - 5.287: 99.4077% ( 1) 00:14:03.683 5.370 - 5.398: 99.4139% ( 1) 00:14:03.683 5.398 - 5.426: 99.4201% ( 1) 00:14:03.683 5.454 - 5.482: 99.4264% ( 1) 00:14:03.683 5.510 - 5.537: 99.4326% ( 1) 00:14:03.683 5.537 - 5.565: 99.4388% ( 1) 00:14:03.683 5.927 - 5.955: 99.4451% ( 1) 00:14:03.683 6.122 - 6.150: 99.4513% ( 1) 00:14:03.683 6.233 - 6.261: 99.4575% ( 1) 00:14:03.683 6.289 - 6.317: 99.4638% ( 1) 00:14:03.683 6.344 - 6.372: 99.4700% ( 1) 00:14:03.683 6.456 - 6.483: 99.4825% ( 2) 00:14:03.683 6.595 - 6.623: 99.4887% ( 1) 00:14:03.683 8.125 - 8.181: 99.4949% ( 1) 00:14:03.683 13.635 - 13.690: 99.5012% ( 1) 00:14:03.683 17.586 - 17.697: 99.5074% ( 1) 00:14:03.683 55.207 - 55.430: 99.5137% ( 1) 00:14:03.683 555.631 - 559.193: 99.5199% ( 1) 00:14:03.683 2137.043 - 2151.290: 99.5261% ( 1) 00:14:03.683 3034.602 - 3048.849: 99.5324% ( 1) 00:14:03.683 3989.148 - 4017.642: 99.9938% ( 74) 00:14:03.683 6981.009 - 7009.503: 100.0000% ( 1) 00:14:03.683 00:14:03.683 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:03.683 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:03.683 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:03.683 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:03.683 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:03.683 [ 00:14:03.683 { 00:14:03.683 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:03.683 "subtype": "Discovery", 00:14:03.683 "listen_addresses": [], 00:14:03.683 "allow_any_host": true, 00:14:03.683 "hosts": [] 00:14:03.683 }, 00:14:03.683 { 00:14:03.683 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:03.683 "subtype": "NVMe", 00:14:03.683 "listen_addresses": [ 00:14:03.683 { 00:14:03.683 "trtype": "VFIOUSER", 00:14:03.683 "adrfam": "IPv4", 00:14:03.683 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:03.683 "trsvcid": "0" 00:14:03.683 } 00:14:03.683 ], 00:14:03.683 "allow_any_host": true, 00:14:03.683 "hosts": [], 00:14:03.683 "serial_number": "SPDK1", 00:14:03.683 "model_number": "SPDK bdev Controller", 00:14:03.683 "max_namespaces": 32, 00:14:03.683 "min_cntlid": 1, 00:14:03.683 "max_cntlid": 65519, 00:14:03.683 "namespaces": [ 00:14:03.683 { 00:14:03.683 "nsid": 1, 00:14:03.683 "bdev_name": "Malloc1", 00:14:03.683 "name": "Malloc1", 00:14:03.683 "nguid": "BA15968107FC4172AAE0A3288E68C95F", 00:14:03.683 "uuid": "ba159681-07fc-4172-aae0-a3288e68c95f" 00:14:03.683 } 00:14:03.683 ] 00:14:03.683 }, 00:14:03.683 { 00:14:03.683 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:03.683 "subtype": "NVMe", 00:14:03.683 "listen_addresses": [ 00:14:03.683 { 00:14:03.683 "trtype": "VFIOUSER", 00:14:03.683 "adrfam": "IPv4", 00:14:03.683 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:03.683 "trsvcid": "0" 00:14:03.683 } 00:14:03.683 ], 00:14:03.683 "allow_any_host": true, 00:14:03.683 "hosts": [], 00:14:03.683 "serial_number": "SPDK2", 00:14:03.683 "model_number": "SPDK bdev Controller", 00:14:03.683 "max_namespaces": 32, 00:14:03.683 "min_cntlid": 1, 00:14:03.683 "max_cntlid": 65519, 00:14:03.683 "namespaces": [ 00:14:03.683 { 00:14:03.683 "nsid": 1, 00:14:03.683 "bdev_name": "Malloc2", 00:14:03.683 "name": "Malloc2", 00:14:03.683 "nguid": "FB9AF4D989734FE7B8A35AE757A565E9", 00:14:03.683 "uuid": "fb9af4d9-8973-4fe7-b8a3-5ae757a565e9" 00:14:03.683 } 00:14:03.683 ] 00:14:03.683 } 00:14:03.683 ] 00:14:03.683 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:03.683 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3077584 00:14:03.683 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:03.683 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:03.683 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:03.683 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:03.683 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:03.683 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:03.683 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:03.683 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:03.942 [2024-12-11 14:54:56.742558] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:03.942 Malloc3 00:14:03.942 14:54:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:03.942 [2024-12-11 14:54:56.979387] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:04.201 14:54:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:04.201 Asynchronous Event Request test 00:14:04.201 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.201 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:04.201 Registering asynchronous event callbacks... 00:14:04.201 Starting namespace attribute notice tests for all controllers... 00:14:04.201 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:04.201 aer_cb - Changed Namespace 00:14:04.201 Cleaning up... 00:14:04.201 [ 00:14:04.201 { 00:14:04.201 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:04.201 "subtype": "Discovery", 00:14:04.201 "listen_addresses": [], 00:14:04.201 "allow_any_host": true, 00:14:04.201 "hosts": [] 00:14:04.201 }, 00:14:04.201 { 00:14:04.201 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:04.201 "subtype": "NVMe", 00:14:04.201 "listen_addresses": [ 00:14:04.201 { 00:14:04.201 "trtype": "VFIOUSER", 00:14:04.201 "adrfam": "IPv4", 00:14:04.201 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:04.201 "trsvcid": "0" 00:14:04.201 } 00:14:04.201 ], 00:14:04.201 "allow_any_host": true, 00:14:04.201 "hosts": [], 00:14:04.201 "serial_number": "SPDK1", 00:14:04.201 "model_number": "SPDK bdev Controller", 00:14:04.201 "max_namespaces": 32, 00:14:04.201 "min_cntlid": 1, 00:14:04.201 "max_cntlid": 65519, 00:14:04.201 "namespaces": [ 00:14:04.201 { 00:14:04.201 "nsid": 1, 00:14:04.201 "bdev_name": "Malloc1", 00:14:04.201 "name": "Malloc1", 00:14:04.201 "nguid": "BA15968107FC4172AAE0A3288E68C95F", 00:14:04.201 "uuid": "ba159681-07fc-4172-aae0-a3288e68c95f" 00:14:04.201 }, 00:14:04.201 { 00:14:04.201 "nsid": 2, 00:14:04.201 "bdev_name": "Malloc3", 00:14:04.201 "name": "Malloc3", 00:14:04.201 "nguid": "FA22B4BE875443E7A13B68C634A0B76E", 00:14:04.201 "uuid": "fa22b4be-8754-43e7-a13b-68c634a0b76e" 00:14:04.201 } 00:14:04.201 ] 00:14:04.201 }, 00:14:04.201 { 00:14:04.201 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:04.201 "subtype": "NVMe", 00:14:04.201 "listen_addresses": [ 00:14:04.201 { 00:14:04.201 "trtype": "VFIOUSER", 00:14:04.201 "adrfam": "IPv4", 00:14:04.201 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:04.201 "trsvcid": "0" 00:14:04.201 } 00:14:04.201 ], 00:14:04.201 "allow_any_host": true, 00:14:04.201 "hosts": [], 00:14:04.201 "serial_number": "SPDK2", 00:14:04.201 "model_number": "SPDK bdev 
Controller", 00:14:04.201 "max_namespaces": 32, 00:14:04.201 "min_cntlid": 1, 00:14:04.201 "max_cntlid": 65519, 00:14:04.201 "namespaces": [ 00:14:04.201 { 00:14:04.201 "nsid": 1, 00:14:04.201 "bdev_name": "Malloc2", 00:14:04.201 "name": "Malloc2", 00:14:04.202 "nguid": "FB9AF4D989734FE7B8A35AE757A565E9", 00:14:04.202 "uuid": "fb9af4d9-8973-4fe7-b8a3-5ae757a565e9" 00:14:04.202 } 00:14:04.202 ] 00:14:04.202 } 00:14:04.202 ] 00:14:04.202 14:54:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3077584 00:14:04.202 14:54:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:04.202 14:54:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:04.202 14:54:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:04.202 14:54:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:04.202 [2024-12-11 14:54:57.230948] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:14:04.202 [2024-12-11 14:54:57.230995] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3077595 ] 00:14:04.462 [2024-12-11 14:54:57.271942] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:04.462 [2024-12-11 14:54:57.276205] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:04.462 [2024-12-11 14:54:57.276229] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f66df0b7000 00:14:04.462 [2024-12-11 14:54:57.277203] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.462 [2024-12-11 14:54:57.278210] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.462 [2024-12-11 14:54:57.279213] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.462 [2024-12-11 14:54:57.280218] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:04.463 [2024-12-11 14:54:57.281224] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:04.463 [2024-12-11 14:54:57.282237] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:04.463 [2024-12-11 14:54:57.283243] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:04.463 [2024-12-11 14:54:57.284247] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:14:04.463 [2024-12-11 14:54:57.285256] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:04.463 [2024-12-11 14:54:57.285267] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f66df0ac000 00:14:04.463 [2024-12-11 14:54:57.286208] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:04.463 [2024-12-11 14:54:57.295728] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:04.463 [2024-12-11 14:54:57.295756] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:04.463 [2024-12-11 14:54:57.300834] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:04.463 [2024-12-11 14:54:57.300871] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:04.463 [2024-12-11 14:54:57.300944] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:04.463 [2024-12-11 14:54:57.300957] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:04.463 [2024-12-11 14:54:57.300962] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:04.463 [2024-12-11 14:54:57.301840] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:04.463 [2024-12-11 14:54:57.301850] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:04.463 [2024-12-11 14:54:57.301857] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:04.463 [2024-12-11 14:54:57.302845] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:04.463 [2024-12-11 14:54:57.302855] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:04.463 [2024-12-11 14:54:57.302861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:04.463 [2024-12-11 14:54:57.303853] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:04.463 [2024-12-11 14:54:57.303862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:04.463 [2024-12-11 14:54:57.304854] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:04.463 [2024-12-11 14:54:57.304863] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:14:04.463 [2024-12-11 14:54:57.304868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:04.463 [2024-12-11 14:54:57.304874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:04.463 [2024-12-11 14:54:57.304981] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:04.463 [2024-12-11 14:54:57.304988] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:04.463 [2024-12-11 14:54:57.304993] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:04.463 [2024-12-11 14:54:57.305872] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:04.463 [2024-12-11 14:54:57.306881] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:04.463 [2024-12-11 14:54:57.307897] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:04.463 [2024-12-11 14:54:57.308899] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:04.463 [2024-12-11 14:54:57.308939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:04.463 [2024-12-11 14:54:57.309913] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:04.463 [2024-12-11 14:54:57.309922] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:04.463 [2024-12-11 14:54:57.309927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:04.463 [2024-12-11 14:54:57.309944] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:04.463 [2024-12-11 14:54:57.309954] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:04.463 [2024-12-11 14:54:57.309964] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:04.463 [2024-12-11 14:54:57.309969] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.463 [2024-12-11 14:54:57.309972] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.463 [2024-12-11 14:54:57.309983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.463 [2024-12-11 14:54:57.317164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:04.463 
[2024-12-11 14:54:57.317177] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:04.463 [2024-12-11 14:54:57.317181] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:04.463 [2024-12-11 14:54:57.317186] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:04.463 [2024-12-11 14:54:57.317190] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:04.463 [2024-12-11 14:54:57.317194] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:04.463 [2024-12-11 14:54:57.317199] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:04.463 [2024-12-11 14:54:57.317203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:04.463 [2024-12-11 14:54:57.317212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:04.463 [2024-12-11 14:54:57.317223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:04.463 [2024-12-11 14:54:57.325163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:04.463 [2024-12-11 14:54:57.325174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.463 [2024-12-11 14:54:57.325182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.463 [2024-12-11 14:54:57.325190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.463 [2024-12-11 14:54:57.325197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:04.463 [2024-12-11 14:54:57.325201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:04.463 [2024-12-11 14:54:57.325211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:04.463 [2024-12-11 14:54:57.325220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:04.463 [2024-12-11 14:54:57.333163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:04.463 [2024-12-11 14:54:57.333172] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:04.463 [2024-12-11 14:54:57.333177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:14:04.463 [2024-12-11 14:54:57.333183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:04.463 [2024-12-11 14:54:57.333188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:04.463 [2024-12-11 14:54:57.333196] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:04.463 [2024-12-11 14:54:57.341173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:04.463 [2024-12-11 14:54:57.341226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:04.463 [2024-12-11 14:54:57.341236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:04.463 [2024-12-11 14:54:57.341243] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:04.463 [2024-12-11 14:54:57.341248] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:04.463 [2024-12-11 14:54:57.341251] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.463 [2024-12-11 14:54:57.341256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:04.463 [2024-12-11 14:54:57.349162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:04.463 [2024-12-11 14:54:57.349173] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:04.463 [2024-12-11 14:54:57.349181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:04.463 [2024-12-11 14:54:57.349188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:04.463 [2024-12-11 14:54:57.349196] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:04.464 [2024-12-11 14:54:57.349200] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.464 [2024-12-11 14:54:57.349204] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.464 [2024-12-11 14:54:57.349209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.464 [2024-12-11 14:54:57.357164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:04.464 [2024-12-11 14:54:57.357176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:04.464 [2024-12-11 14:54:57.357184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:14:04.464 [2024-12-11 14:54:57.357190] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:04.464 [2024-12-11 14:54:57.357194] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.464 [2024-12-11 14:54:57.357197] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.464 [2024-12-11 14:54:57.357203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.464 [2024-12-11 14:54:57.365161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:04.464 [2024-12-11 14:54:57.365171] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:04.464 [2024-12-11 14:54:57.365176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:04.464 [2024-12-11 14:54:57.365184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:04.464 [2024-12-11 14:54:57.365189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:04.464 [2024-12-11 14:54:57.365193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:04.464 [2024-12-11 14:54:57.365198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:04.464 [2024-12-11 14:54:57.365202] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:04.464 [2024-12-11 14:54:57.365207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:04.464 [2024-12-11 14:54:57.365211] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:04.464 [2024-12-11 14:54:57.365227] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:04.464 [2024-12-11 14:54:57.373165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:04.464 [2024-12-11 14:54:57.373178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:04.464 [2024-12-11 14:54:57.381162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:04.464 [2024-12-11 14:54:57.381176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:04.464 [2024-12-11 14:54:57.389162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:14:04.464 [2024-12-11 14:54:57.389174] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:04.464 [2024-12-11 14:54:57.397162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:04.464 [2024-12-11 14:54:57.397177] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:04.464 [2024-12-11 14:54:57.397181] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:04.464 [2024-12-11 14:54:57.397184] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:04.464 [2024-12-11 14:54:57.397187] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:04.464 [2024-12-11 14:54:57.397190] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:04.464 [2024-12-11 14:54:57.397196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:04.464 [2024-12-11 14:54:57.397203] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:04.464 [2024-12-11 14:54:57.397206] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:04.464 [2024-12-11 14:54:57.397210] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.464 [2024-12-11 14:54:57.397215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:04.464 [2024-12-11 14:54:57.397221] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:04.464 [2024-12-11 14:54:57.397225] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:04.464 [2024-12-11 14:54:57.397228] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.464 [2024-12-11 14:54:57.397233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:04.464 [2024-12-11 14:54:57.397240] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:04.464 [2024-12-11 14:54:57.397243] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:04.464 [2024-12-11 14:54:57.397246] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:04.464 [2024-12-11 14:54:57.397252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:04.464 [2024-12-11 14:54:57.405165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:04.464 [2024-12-11 14:54:57.405180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:04.464 [2024-12-11 14:54:57.405189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:04.464 
[2024-12-11 14:54:57.405197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:04.464 ===================================================== 00:14:04.464 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:04.464 ===================================================== 00:14:04.464 Controller Capabilities/Features 00:14:04.464 ================================ 00:14:04.464 Vendor ID: 4e58 00:14:04.464 Subsystem Vendor ID: 4e58 00:14:04.464 Serial Number: SPDK2 00:14:04.464 Model Number: SPDK bdev Controller 00:14:04.464 Firmware Version: 25.01 00:14:04.464 Recommended Arb Burst: 6 00:14:04.464 IEEE OUI Identifier: 8d 6b 50 00:14:04.464 Multi-path I/O 00:14:04.464 May have multiple subsystem ports: Yes 00:14:04.464 May have multiple controllers: Yes 00:14:04.464 Associated with SR-IOV VF: No 00:14:04.464 Max Data Transfer Size: 131072 00:14:04.464 Max Number of Namespaces: 32 00:14:04.464 Max Number of I/O Queues: 127 00:14:04.464 NVMe Specification Version (VS): 1.3 00:14:04.464 NVMe Specification Version (Identify): 1.3 00:14:04.464 Maximum Queue Entries: 256 00:14:04.464 Contiguous Queues Required: Yes 00:14:04.464 Arbitration Mechanisms Supported 00:14:04.464 Weighted Round Robin: Not Supported 00:14:04.464 Vendor Specific: Not Supported 00:14:04.464 Reset Timeout: 15000 ms 00:14:04.464 Doorbell Stride: 4 bytes 00:14:04.464 NVM Subsystem Reset: Not Supported 00:14:04.464 Command Sets Supported 00:14:04.464 NVM Command Set: Supported 00:14:04.464 Boot Partition: Not Supported 00:14:04.464 Memory Page Size Minimum: 4096 bytes 00:14:04.464 Memory Page Size Maximum: 4096 bytes 00:14:04.464 Persistent Memory Region: Not Supported 00:14:04.464 Optional Asynchronous Events Supported 00:14:04.464 Namespace Attribute Notices: Supported 00:14:04.464 Firmware Activation Notices: Not Supported 00:14:04.464 ANA Change Notices: Not Supported 00:14:04.464 PLE Aggregate Log Change Notices: Not Supported 00:14:04.464 LBA Status Info Alert Notices: Not Supported 00:14:04.464 EGE Aggregate Log Change Notices: Not Supported 00:14:04.464 Normal NVM Subsystem Shutdown event: Not Supported 00:14:04.464 Zone Descriptor Change Notices: Not Supported 00:14:04.464 Discovery Log Change Notices: Not Supported 00:14:04.464 Controller Attributes 00:14:04.464 128-bit Host Identifier: Supported 00:14:04.464 Non-Operational Permissive Mode: Not Supported 00:14:04.464 NVM Sets: Not Supported 00:14:04.464 Read Recovery Levels: Not Supported 00:14:04.464 Endurance Groups: Not Supported 00:14:04.464 Predictable Latency Mode: Not Supported 00:14:04.464 Traffic Based Keep ALive: Not Supported 00:14:04.464 Namespace Granularity: Not Supported 00:14:04.464 SQ Associations: Not Supported 00:14:04.464 UUID List: Not Supported 00:14:04.464 Multi-Domain Subsystem: Not Supported 00:14:04.464 Fixed Capacity Management: Not Supported 00:14:04.464 Variable Capacity Management: Not Supported 00:14:04.464 Delete Endurance Group: Not Supported 00:14:04.464 Delete NVM Set: Not Supported 00:14:04.464 Extended LBA Formats Supported: Not Supported 00:14:04.464 Flexible Data Placement Supported: Not Supported 00:14:04.464 00:14:04.464 Controller Memory Buffer Support 00:14:04.464 ================================ 00:14:04.464 Supported: No 00:14:04.464 00:14:04.464 Persistent Memory Region Support 00:14:04.464 ================================ 00:14:04.464 Supported: No 00:14:04.464 00:14:04.464 Admin Command Set Attributes 
00:14:04.464 ============================ 00:14:04.464 Security Send/Receive: Not Supported 00:14:04.464 Format NVM: Not Supported 00:14:04.464 Firmware Activate/Download: Not Supported 00:14:04.464 Namespace Management: Not Supported 00:14:04.464 Device Self-Test: Not Supported 00:14:04.464 Directives: Not Supported 00:14:04.464 NVMe-MI: Not Supported 00:14:04.464 Virtualization Management: Not Supported 00:14:04.465 Doorbell Buffer Config: Not Supported 00:14:04.465 Get LBA Status Capability: Not Supported 00:14:04.465 Command & Feature Lockdown Capability: Not Supported 00:14:04.465 Abort Command Limit: 4 00:14:04.465 Async Event Request Limit: 4 00:14:04.465 Number of Firmware Slots: N/A 00:14:04.465 Firmware Slot 1 Read-Only: N/A 00:14:04.465 Firmware Activation Without Reset: N/A 00:14:04.465 Multiple Update Detection Support: N/A 00:14:04.465 Firmware Update Granularity: No Information Provided 00:14:04.465 Per-Namespace SMART Log: No 00:14:04.465 Asymmetric Namespace Access Log Page: Not Supported 00:14:04.465 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:04.465 Command Effects Log Page: Supported 00:14:04.465 Get Log Page Extended Data: Supported 00:14:04.465 Telemetry Log Pages: Not Supported 00:14:04.465 Persistent Event Log Pages: Not Supported 00:14:04.465 Supported Log Pages Log Page: May Support 00:14:04.465 Commands Supported & Effects Log Page: Not Supported 00:14:04.465 Feature Identifiers & Effects Log Page:May Support 00:14:04.465 NVMe-MI Commands & Effects Log Page: May Support 00:14:04.465 Data Area 4 for Telemetry Log: Not Supported 00:14:04.465 Error Log Page Entries Supported: 128 00:14:04.465 Keep Alive: Supported 00:14:04.465 Keep Alive Granularity: 10000 ms 00:14:04.465 00:14:04.465 NVM Command Set Attributes 00:14:04.465 ========================== 00:14:04.465 Submission Queue Entry Size 00:14:04.465 Max: 64 00:14:04.465 Min: 64 00:14:04.465 Completion Queue Entry Size 00:14:04.465 Max: 16 00:14:04.465 Min: 16 00:14:04.465 Number of Namespaces: 32 00:14:04.465 Compare Command: Supported 00:14:04.465 Write Uncorrectable Command: Not Supported 00:14:04.465 Dataset Management Command: Supported 00:14:04.465 Write Zeroes Command: Supported 00:14:04.465 Set Features Save Field: Not Supported 00:14:04.465 Reservations: Not Supported 00:14:04.465 Timestamp: Not Supported 00:14:04.465 Copy: Supported 00:14:04.465 Volatile Write Cache: Present 00:14:04.465 Atomic Write Unit (Normal): 1 00:14:04.465 Atomic Write Unit (PFail): 1 00:14:04.465 Atomic Compare & Write Unit: 1 00:14:04.465 Fused Compare & Write: Supported 00:14:04.465 Scatter-Gather List 00:14:04.465 SGL Command Set: Supported (Dword aligned) 00:14:04.465 SGL Keyed: Not Supported 00:14:04.465 SGL Bit Bucket Descriptor: Not Supported 00:14:04.465 SGL Metadata Pointer: Not Supported 00:14:04.465 Oversized SGL: Not Supported 00:14:04.465 SGL Metadata Address: Not Supported 00:14:04.465 SGL Offset: Not Supported 00:14:04.465 Transport SGL Data Block: Not Supported 00:14:04.465 Replay Protected Memory Block: Not Supported 00:14:04.465 00:14:04.465 Firmware Slot Information 00:14:04.465 ========================= 00:14:04.465 Active slot: 1 00:14:04.465 Slot 1 Firmware Revision: 25.01 00:14:04.465 00:14:04.465 00:14:04.465 Commands Supported and Effects 00:14:04.465 ============================== 00:14:04.465 Admin Commands 00:14:04.465 -------------- 00:14:04.465 Get Log Page (02h): Supported 00:14:04.465 Identify (06h): Supported 00:14:04.465 Abort (08h): Supported 00:14:04.465 Set Features (09h): Supported 
00:14:04.465 Get Features (0Ah): Supported 00:14:04.465 Asynchronous Event Request (0Ch): Supported 00:14:04.465 Keep Alive (18h): Supported 00:14:04.465 I/O Commands 00:14:04.465 ------------ 00:14:04.465 Flush (00h): Supported LBA-Change 00:14:04.465 Write (01h): Supported LBA-Change 00:14:04.465 Read (02h): Supported 00:14:04.465 Compare (05h): Supported 00:14:04.465 Write Zeroes (08h): Supported LBA-Change 00:14:04.465 Dataset Management (09h): Supported LBA-Change 00:14:04.465 Copy (19h): Supported LBA-Change 00:14:04.465 00:14:04.465 Error Log 00:14:04.465 ========= 00:14:04.465 00:14:04.465 Arbitration 00:14:04.465 =========== 00:14:04.465 Arbitration Burst: 1 00:14:04.465 00:14:04.465 Power Management 00:14:04.465 ================ 00:14:04.465 Number of Power States: 1 00:14:04.465 Current Power State: Power State #0 00:14:04.465 Power State #0: 00:14:04.465 Max Power: 0.00 W 00:14:04.465 Non-Operational State: Operational 00:14:04.465 Entry Latency: Not Reported 00:14:04.465 Exit Latency: Not Reported 00:14:04.465 Relative Read Throughput: 0 00:14:04.465 Relative Read Latency: 0 00:14:04.465 Relative Write Throughput: 0 00:14:04.465 Relative Write Latency: 0 00:14:04.465 Idle Power: Not Reported 00:14:04.465 Active Power: Not Reported 00:14:04.465 Non-Operational Permissive Mode: Not Supported 00:14:04.465 00:14:04.465 Health Information 00:14:04.465 ================== 00:14:04.465 Critical Warnings: 00:14:04.465 Available Spare Space: OK 00:14:04.465 Temperature: OK 00:14:04.465 Device Reliability: OK 00:14:04.465 Read Only: No 00:14:04.465 Volatile Memory Backup: OK 00:14:04.465 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:04.465 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:04.465 Available Spare: 0% 00:14:04.465 Available Sp[2024-12-11 14:54:57.405286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:04.465 [2024-12-11 14:54:57.413164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:04.465 [2024-12-11 14:54:57.413193] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:04.465 [2024-12-11 14:54:57.413203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.465 [2024-12-11 14:54:57.413209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.465 [2024-12-11 14:54:57.413215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.465 [2024-12-11 14:54:57.413221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:04.465 [2024-12-11 14:54:57.413260] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:04.465 [2024-12-11 14:54:57.413270] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:04.465 [2024-12-11 14:54:57.414261] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:04.465 [2024-12-11 14:54:57.414306] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:04.465 [2024-12-11 14:54:57.414312] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:04.465 [2024-12-11 14:54:57.415264] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:04.465 [2024-12-11 14:54:57.415275] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:04.465 [2024-12-11 14:54:57.415324] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:04.465 [2024-12-11 14:54:57.416308] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:04.465 are Threshold: 0% 00:14:04.465 Life Percentage Used: 0% 00:14:04.465 Data Units Read: 0 00:14:04.465 Data Units Written: 0 00:14:04.465 Host Read Commands: 0 00:14:04.465 Host Write Commands: 0 00:14:04.465 Controller Busy Time: 0 minutes 00:14:04.465 Power Cycles: 0 00:14:04.465 Power On Hours: 0 hours 00:14:04.465 Unsafe Shutdowns: 0 00:14:04.465 Unrecoverable Media Errors: 0 00:14:04.465 Lifetime Error Log Entries: 0 00:14:04.465 Warning Temperature Time: 0 minutes 00:14:04.465 Critical Temperature Time: 0 minutes 00:14:04.465 00:14:04.465 Number of Queues 00:14:04.465 ================ 00:14:04.465 Number of I/O Submission Queues: 127 00:14:04.465 Number of I/O Completion Queues: 127 00:14:04.465 00:14:04.465 Active Namespaces 00:14:04.465 ================= 00:14:04.465 Namespace ID:1 00:14:04.465 Error Recovery Timeout: Unlimited 00:14:04.465 Command Set Identifier: NVM (00h) 00:14:04.465 Deallocate: Supported 00:14:04.465 Deallocated/Unwritten Error: Not Supported 00:14:04.465 Deallocated Read Value: Unknown 00:14:04.465 Deallocate in Write Zeroes: Not Supported 00:14:04.465 Deallocated Guard Field: 0xFFFF 00:14:04.465 Flush: Supported 00:14:04.465 Reservation: Supported 00:14:04.465 Namespace Sharing Capabilities: Multiple Controllers 00:14:04.465 Size (in LBAs): 131072 (0GiB) 00:14:04.465 Capacity (in LBAs): 131072 (0GiB) 00:14:04.465 Utilization (in LBAs): 131072 (0GiB) 00:14:04.465 NGUID: FB9AF4D989734FE7B8A35AE757A565E9 00:14:04.465 UUID: fb9af4d9-8973-4fe7-b8a3-5ae757a565e9 00:14:04.465 Thin Provisioning: Not Supported 00:14:04.465 Per-NS Atomic Units: Yes 00:14:04.465 Atomic Boundary Size (Normal): 0 00:14:04.465 Atomic Boundary Size (PFail): 0 00:14:04.465 Atomic Boundary Offset: 0 00:14:04.465 Maximum Single Source Range Length: 65535 00:14:04.465 Maximum Copy Length: 65535 00:14:04.465 Maximum Source Range Count: 1 00:14:04.465 NGUID/EUI64 Never Reused: No 00:14:04.465 Namespace Write Protected: No 00:14:04.465 Number of LBA Formats: 1 00:14:04.465 Current LBA Format: LBA Format #00 00:14:04.465 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:04.465 00:14:04.465 14:54:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:04.724 [2024-12-11 14:54:57.653601] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:09.997 Initializing NVMe Controllers 00:14:09.997 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:09.997 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:09.997 Initialization complete. Launching workers. 00:14:09.997 ======================================================== 00:14:09.997 Latency(us) 00:14:09.997 Device Information : IOPS MiB/s Average min max 00:14:09.997 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39897.75 155.85 3208.05 1003.49 7568.35 00:14:09.997 ======================================================== 00:14:09.997 Total : 39897.75 155.85 3208.05 1003.49 7568.35 00:14:09.997 00:14:09.997 [2024-12-11 14:55:02.759419] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:09.997 14:55:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:09.997 [2024-12-11 14:55:02.998134] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:15.271 Initializing NVMe Controllers 00:14:15.271 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:15.271 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:15.271 Initialization complete. Launching workers. 00:14:15.271 ======================================================== 00:14:15.271 Latency(us) 00:14:15.271 Device Information : IOPS MiB/s Average min max 00:14:15.271 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39920.00 155.94 3206.23 987.04 9568.15 00:14:15.271 ======================================================== 00:14:15.271 Total : 39920.00 155.94 3206.23 987.04 9568.15 00:14:15.271 00:14:15.271 [2024-12-11 14:55:08.016286] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:15.271 14:55:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:15.271 [2024-12-11 14:55:08.223712] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:20.545 [2024-12-11 14:55:13.359258] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:20.545 Initializing NVMe Controllers 00:14:20.545 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:20.545 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:20.545 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:20.545 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:20.545 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:20.545 Initialization complete. Launching workers. 
00:14:20.545 Starting thread on core 2 00:14:20.545 Starting thread on core 3 00:14:20.545 Starting thread on core 1 00:14:20.545 14:55:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:20.804 [2024-12-11 14:55:13.656601] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:24.093 [2024-12-11 14:55:16.705201] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:24.093 Initializing NVMe Controllers 00:14:24.093 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:24.093 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:24.093 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:24.093 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:24.093 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:24.094 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:24.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration run with configuration: 00:14:24.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:24.094 Initialization complete. Launching workers. 00:14:24.094 Starting thread on core 1 with urgent priority queue 00:14:24.094 Starting thread on core 2 with urgent priority queue 00:14:24.094 Starting thread on core 3 with urgent priority queue 00:14:24.094 Starting thread on core 0 with urgent priority queue 00:14:24.094 SPDK bdev Controller (SPDK2 ) core 0: 7919.67 IO/s 12.63 secs/100000 ios 00:14:24.094 SPDK bdev Controller (SPDK2 ) core 1: 8901.33 IO/s 11.23 secs/100000 ios 00:14:24.094 SPDK bdev Controller (SPDK2 ) core 2: 9951.00 IO/s 10.05 secs/100000 ios 00:14:24.094 SPDK bdev Controller (SPDK2 ) core 3: 7762.67 IO/s 12.88 secs/100000 ios 00:14:24.094 ======================================================== 00:14:24.094 00:14:24.094 14:55:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:24.094 [2024-12-11 14:55:16.996615] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:24.094 Initializing NVMe Controllers 00:14:24.094 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:24.094 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:24.094 Namespace ID: 1 size: 0GB 00:14:24.094 Initialization complete. 00:14:24.094 INFO: using host memory buffer for IO 00:14:24.094 Hello world! 
00:14:24.094 [2024-12-11 14:55:17.008695] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:24.094 14:55:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:24.353 [2024-12-11 14:55:17.296115] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:25.731 Initializing NVMe Controllers 00:14:25.731 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:25.731 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:25.731 Initialization complete. Launching workers. 00:14:25.731 submit (in ns) avg, min, max = 7038.5, 3223.5, 3999556.5 00:14:25.731 complete (in ns) avg, min, max = 23160.9, 1774.8, 4000961.7 00:14:25.731 00:14:25.731 Submit histogram 00:14:25.731 ================ 00:14:25.731 Range in us Cumulative Count 00:14:25.731 3.214 - 3.228: 0.0063% ( 1) 00:14:25.731 3.242 - 3.256: 0.0444% ( 6) 00:14:25.731 3.256 - 3.270: 0.1460% ( 16) 00:14:25.731 3.270 - 3.283: 0.2348% ( 14) 00:14:25.731 3.283 - 3.297: 0.3174% ( 13) 00:14:25.731 3.297 - 3.311: 0.5141% ( 31) 00:14:25.731 3.311 - 3.325: 1.1869% ( 106) 00:14:25.731 3.325 - 3.339: 3.9099% ( 429) 00:14:25.731 3.339 - 3.353: 7.9848% ( 642) 00:14:25.731 3.353 - 3.367: 13.6528% ( 893) 00:14:25.731 3.367 - 3.381: 19.8477% ( 976) 00:14:25.731 3.381 - 3.395: 26.2647% ( 1011) 00:14:25.731 3.395 - 3.409: 31.3234% ( 797) 00:14:25.731 3.409 - 3.423: 36.9280% ( 883) 00:14:25.731 3.423 - 3.437: 42.1644% ( 825) 00:14:25.731 3.437 - 3.450: 46.5820% ( 696) 00:14:25.731 3.450 - 3.464: 50.5871% ( 631) 00:14:25.731 3.464 - 3.478: 55.2333% ( 732) 00:14:25.731 3.478 - 3.492: 61.5170% ( 990) 00:14:25.731 3.492 - 3.506: 66.8105% ( 834) 00:14:25.731 3.506 - 3.520: 71.6344% ( 760) 00:14:25.731 3.520 - 3.534: 76.6804% ( 795) 00:14:25.731 3.534 - 3.548: 80.9267% ( 669) 00:14:25.731 3.548 - 3.562: 84.0812% ( 497) 00:14:25.731 3.562 - 3.590: 87.0263% ( 464) 00:14:25.731 3.590 - 3.617: 87.8959% ( 137) 00:14:25.731 3.617 - 3.645: 89.0003% ( 174) 00:14:25.731 3.645 - 3.673: 90.6569% ( 261) 00:14:25.731 3.673 - 3.701: 92.3516% ( 267) 00:14:25.731 3.701 - 3.729: 94.0019% ( 260) 00:14:25.731 3.729 - 3.757: 95.5633% ( 246) 00:14:25.731 3.757 - 3.784: 97.0739% ( 238) 00:14:25.731 3.784 - 3.812: 98.1720% ( 173) 00:14:25.731 3.812 - 3.840: 98.7813% ( 96) 00:14:25.731 3.840 - 3.868: 99.1685% ( 61) 00:14:25.731 3.868 - 3.896: 99.4732% ( 48) 00:14:25.731 3.896 - 3.923: 99.5367% ( 10) 00:14:25.731 3.923 - 3.951: 99.5811% ( 7) 00:14:25.731 3.951 - 3.979: 99.5874% ( 1) 00:14:25.731 4.007 - 4.035: 99.5938% ( 1) 00:14:25.731 4.118 - 4.146: 99.6001% ( 1) 00:14:25.731 4.285 - 4.313: 99.6065% ( 1) 00:14:25.731 4.953 - 4.981: 99.6128% ( 1) 00:14:25.731 5.037 - 5.064: 99.6192% ( 1) 00:14:25.731 5.315 - 5.343: 99.6255% ( 1) 00:14:25.731 5.343 - 5.370: 99.6446% ( 3) 00:14:25.731 5.370 - 5.398: 99.6509% ( 1) 00:14:25.731 5.398 - 5.426: 99.6573% ( 1) 00:14:25.731 5.426 - 5.454: 99.6699% ( 2) 00:14:25.731 5.510 - 5.537: 99.6826% ( 2) 00:14:25.732 5.537 - 5.565: 99.6890% ( 1) 00:14:25.732 5.565 - 5.593: 99.6953% ( 1) 00:14:25.732 5.593 - 5.621: 99.7080% ( 2) 00:14:25.732 5.704 - 5.732: 99.7144% ( 1) 00:14:25.732 5.788 - 5.816: 99.7271% ( 2) 00:14:25.732 5.816 - 5.843: 99.7398% ( 2) 00:14:25.732 5.871 - 5.899: 99.7461% ( 1) 00:14:25.732 5.955 - 
5.983: 99.7588% ( 2) 00:14:25.732 6.038 - 6.066: 99.7715% ( 2) 00:14:25.732 6.094 - 6.122: 99.7905% ( 3) 00:14:25.732 6.122 - 6.150: 99.8032% ( 2) 00:14:25.732 6.150 - 6.177: 99.8096% ( 1) 00:14:25.732 6.205 - 6.233: 99.8159% ( 1) 00:14:25.732 6.233 - 6.261: 99.8223% ( 1) 00:14:25.732 6.261 - 6.289: 99.8286% ( 1) 00:14:25.732 6.289 - 6.317: 99.8350% ( 1) 00:14:25.732 6.317 - 6.344: 99.8413% ( 1) 00:14:25.732 6.595 - 6.623: 99.8477% ( 1) 00:14:25.732 7.123 - 7.179: 99.8540% ( 1) 00:14:25.732 7.290 - 7.346: 99.8604% ( 1) 00:14:25.732 7.624 - 7.680: 99.8667% ( 1) 00:14:25.732 7.736 - 7.791: 99.8731% ( 1) 00:14:25.732 8.125 - 8.181: 99.8794% ( 1) 00:14:25.732 8.904 - 8.960: 99.8858% ( 1) 00:14:25.732 11.130 - 11.186: 99.8921% ( 1) 00:14:25.732 13.746 - 13.802: 99.8984% ( 1) 00:14:25.732 14.915 - 15.026: 99.9048% ( 1) 00:14:25.732 19.033 - 19.144: 99.9111% ( 1) 00:14:25.732 3989.148 - 4017.642: 100.0000% ( 14) 00:14:25.732 00:14:25.732 [2024-12-11 14:55:18.388207] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:25.732 Complete histogram 00:14:25.732 ================== 00:14:25.732 Range in us Cumulative Count 00:14:25.732 1.774 - 1.781: 0.0127% ( 2) 00:14:25.732 1.781 - 1.795: 0.1714% ( 25) 00:14:25.732 1.795 - 1.809: 0.6855% ( 81) 00:14:25.732 1.809 - 1.823: 1.0917% ( 64) 00:14:25.732 1.823 - 1.837: 1.6249% ( 84) 00:14:25.732 1.837 - 1.850: 14.2812% ( 1994) 00:14:25.732 1.850 - 1.864: 52.3897% ( 6004) 00:14:25.732 1.864 - 1.878: 83.9924% ( 4979) 00:14:25.732 1.878 - 1.892: 92.9990% ( 1419) 00:14:25.732 1.892 - 1.906: 95.5379% ( 400) 00:14:25.732 1.906 - 1.920: 96.4710% ( 147) 00:14:25.732 1.920 - 1.934: 97.5690% ( 173) 00:14:25.732 1.934 - 1.948: 98.6100% ( 164) 00:14:25.732 1.948 - 1.962: 99.0225% ( 65) 00:14:25.732 1.962 - 1.976: 99.1622% ( 22) 00:14:25.732 1.976 - 1.990: 99.2320% ( 11) 00:14:25.732 2.003 - 2.017: 99.2383% ( 1) 00:14:25.732 2.031 - 2.045: 99.2574% ( 3) 00:14:25.732 2.059 - 2.073: 99.2637% ( 1) 00:14:25.732 2.129 - 2.143: 99.2764% ( 2) 00:14:25.732 3.812 - 3.840: 99.2955% ( 3) 00:14:25.732 3.923 - 3.951: 99.3018% ( 1) 00:14:25.732 4.090 - 4.118: 99.3082% ( 1) 00:14:25.732 4.118 - 4.146: 99.3145% ( 1) 00:14:25.732 4.146 - 4.174: 99.3209% ( 1) 00:14:25.732 4.202 - 4.230: 99.3272% ( 1) 00:14:25.732 4.424 - 4.452: 99.3399% ( 2) 00:14:25.732 4.480 - 4.508: 99.3462% ( 1) 00:14:25.732 4.508 - 4.536: 99.3526% ( 1) 00:14:25.732 4.563 - 4.591: 99.3589% ( 1) 00:14:25.732 4.675 - 4.703: 99.3716% ( 2) 00:14:25.732 4.703 - 4.730: 99.3780% ( 1) 00:14:25.732 4.730 - 4.758: 99.3843% ( 1) 00:14:25.732 4.870 - 4.897: 99.3907% ( 1) 00:14:25.732 4.925 - 4.953: 99.4034% ( 2) 00:14:25.732 4.953 - 4.981: 99.4097% ( 1) 00:14:25.732 5.009 - 5.037: 99.4161% ( 1) 00:14:25.732 5.148 - 5.176: 99.4224% ( 1) 00:14:25.732 5.343 - 5.370: 99.4288% ( 1) 00:14:25.732 5.370 - 5.398: 99.4351% ( 1) 00:14:25.732 6.010 - 6.038: 99.4414% ( 1) 00:14:25.732 7.346 - 7.402: 99.4478% ( 1) 00:14:25.732 7.457 - 7.513: 99.4541% ( 1) 00:14:25.732 8.849 - 8.904: 99.4605% ( 1) 00:14:25.732 12.243 - 12.299: 99.4668% ( 1) 00:14:25.732 3903.666 - 3932.160: 99.4732% ( 1) 00:14:25.732 3989.148 - 4017.642: 100.0000% ( 83) 00:14:25.732 00:14:25.732 14:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:25.732 14:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:25.732 
14:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:25.732 14:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:25.732 14:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:25.732 [ 00:14:25.732 { 00:14:25.732 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:25.732 "subtype": "Discovery", 00:14:25.732 "listen_addresses": [], 00:14:25.732 "allow_any_host": true, 00:14:25.732 "hosts": [] 00:14:25.732 }, 00:14:25.732 { 00:14:25.732 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:25.732 "subtype": "NVMe", 00:14:25.732 "listen_addresses": [ 00:14:25.732 { 00:14:25.732 "trtype": "VFIOUSER", 00:14:25.732 "adrfam": "IPv4", 00:14:25.732 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:25.732 "trsvcid": "0" 00:14:25.732 } 00:14:25.732 ], 00:14:25.732 "allow_any_host": true, 00:14:25.732 "hosts": [], 00:14:25.732 "serial_number": "SPDK1", 00:14:25.732 "model_number": "SPDK bdev Controller", 00:14:25.732 "max_namespaces": 32, 00:14:25.732 "min_cntlid": 1, 00:14:25.732 "max_cntlid": 65519, 00:14:25.732 "namespaces": [ 00:14:25.732 { 00:14:25.732 "nsid": 1, 00:14:25.732 "bdev_name": "Malloc1", 00:14:25.732 "name": "Malloc1", 00:14:25.732 "nguid": "BA15968107FC4172AAE0A3288E68C95F", 00:14:25.732 "uuid": "ba159681-07fc-4172-aae0-a3288e68c95f" 00:14:25.732 }, 00:14:25.732 { 00:14:25.732 "nsid": 2, 00:14:25.732 "bdev_name": "Malloc3", 00:14:25.732 "name": "Malloc3", 00:14:25.732 "nguid": "FA22B4BE875443E7A13B68C634A0B76E", 00:14:25.732 "uuid": "fa22b4be-8754-43e7-a13b-68c634a0b76e" 00:14:25.732 } 00:14:25.732 ] 00:14:25.732 }, 00:14:25.732 { 00:14:25.732 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:25.732 "subtype": "NVMe", 00:14:25.732 "listen_addresses": [ 00:14:25.732 { 00:14:25.732 "trtype": "VFIOUSER", 00:14:25.732 "adrfam": "IPv4", 00:14:25.732 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:25.732 "trsvcid": "0" 00:14:25.732 } 00:14:25.732 ], 00:14:25.732 "allow_any_host": true, 00:14:25.732 "hosts": [], 00:14:25.732 "serial_number": "SPDK2", 00:14:25.732 "model_number": "SPDK bdev Controller", 00:14:25.732 "max_namespaces": 32, 00:14:25.732 "min_cntlid": 1, 00:14:25.732 "max_cntlid": 65519, 00:14:25.732 "namespaces": [ 00:14:25.732 { 00:14:25.732 "nsid": 1, 00:14:25.732 "bdev_name": "Malloc2", 00:14:25.732 "name": "Malloc2", 00:14:25.732 "nguid": "FB9AF4D989734FE7B8A35AE757A565E9", 00:14:25.732 "uuid": "fb9af4d9-8973-4fe7-b8a3-5ae757a565e9" 00:14:25.732 } 00:14:25.732 ] 00:14:25.732 } 00:14:25.732 ] 00:14:25.732 14:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:25.732 14:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3081172 00:14:25.732 14:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:25.732 14:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:25.732 14:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:25.733 14:55:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:25.733 14:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:25.733 14:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:25.733 14:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:25.733 14:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:25.991 [2024-12-11 14:55:18.803125] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:25.991 Malloc4 00:14:25.991 14:55:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:26.251 [2024-12-11 14:55:19.060120] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:26.251 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:26.251 Asynchronous Event Request test 00:14:26.251 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:26.251 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:26.251 Registering asynchronous event callbacks... 00:14:26.251 Starting namespace attribute notice tests for all controllers... 00:14:26.251 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:26.251 aer_cb - Changed Namespace 00:14:26.251 Cleaning up... 
00:14:26.251 [ 00:14:26.251 { 00:14:26.251 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:26.251 "subtype": "Discovery", 00:14:26.251 "listen_addresses": [], 00:14:26.251 "allow_any_host": true, 00:14:26.251 "hosts": [] 00:14:26.251 }, 00:14:26.251 { 00:14:26.251 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:26.251 "subtype": "NVMe", 00:14:26.251 "listen_addresses": [ 00:14:26.251 { 00:14:26.251 "trtype": "VFIOUSER", 00:14:26.251 "adrfam": "IPv4", 00:14:26.251 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:26.251 "trsvcid": "0" 00:14:26.251 } 00:14:26.251 ], 00:14:26.251 "allow_any_host": true, 00:14:26.251 "hosts": [], 00:14:26.251 "serial_number": "SPDK1", 00:14:26.251 "model_number": "SPDK bdev Controller", 00:14:26.251 "max_namespaces": 32, 00:14:26.251 "min_cntlid": 1, 00:14:26.251 "max_cntlid": 65519, 00:14:26.251 "namespaces": [ 00:14:26.251 { 00:14:26.251 "nsid": 1, 00:14:26.251 "bdev_name": "Malloc1", 00:14:26.251 "name": "Malloc1", 00:14:26.251 "nguid": "BA15968107FC4172AAE0A3288E68C95F", 00:14:26.251 "uuid": "ba159681-07fc-4172-aae0-a3288e68c95f" 00:14:26.251 }, 00:14:26.251 { 00:14:26.251 "nsid": 2, 00:14:26.251 "bdev_name": "Malloc3", 00:14:26.251 "name": "Malloc3", 00:14:26.251 "nguid": "FA22B4BE875443E7A13B68C634A0B76E", 00:14:26.251 "uuid": "fa22b4be-8754-43e7-a13b-68c634a0b76e" 00:14:26.251 } 00:14:26.251 ] 00:14:26.251 }, 00:14:26.251 { 00:14:26.251 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:26.251 "subtype": "NVMe", 00:14:26.251 "listen_addresses": [ 00:14:26.251 { 00:14:26.251 "trtype": "VFIOUSER", 00:14:26.251 "adrfam": "IPv4", 00:14:26.251 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:26.251 "trsvcid": "0" 00:14:26.251 } 00:14:26.251 ], 00:14:26.251 "allow_any_host": true, 00:14:26.251 "hosts": [], 00:14:26.251 "serial_number": "SPDK2", 00:14:26.251 "model_number": "SPDK bdev Controller", 00:14:26.251 "max_namespaces": 32, 00:14:26.251 "min_cntlid": 1, 00:14:26.251 "max_cntlid": 65519, 00:14:26.251 "namespaces": [ 00:14:26.251 { 00:14:26.251 "nsid": 1, 00:14:26.251 "bdev_name": "Malloc2", 00:14:26.251 "name": "Malloc2", 00:14:26.251 "nguid": "FB9AF4D989734FE7B8A35AE757A565E9", 00:14:26.251 "uuid": "fb9af4d9-8973-4fe7-b8a3-5ae757a565e9" 00:14:26.251 }, 00:14:26.251 { 00:14:26.251 "nsid": 2, 00:14:26.251 "bdev_name": "Malloc4", 00:14:26.251 "name": "Malloc4", 00:14:26.251 "nguid": "36102C1FBF6E42CE8F4FFE9A03F1638F", 00:14:26.251 "uuid": "36102c1f-bf6e-42ce-8f4f-fe9a03f1638f" 00:14:26.251 } 00:14:26.251 ] 00:14:26.251 } 00:14:26.251 ] 00:14:26.251 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3081172 00:14:26.251 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:26.251 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3073441 00:14:26.251 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3073441 ']' 00:14:26.251 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3073441 00:14:26.251 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:26.251 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.251 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3073441 00:14:26.510 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:26.510 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:26.510 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3073441' 00:14:26.510 killing process with pid 3073441 00:14:26.510 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3073441 00:14:26.510 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3073441 00:14:26.770 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:26.770 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:26.770 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:26.770 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:26.770 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:26.770 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3081283 00:14:26.770 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3081283' 00:14:26.770 Process pid: 3081283 00:14:26.770 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:26.770 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:26.770 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3081283 00:14:26.770 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3081283 ']' 00:14:26.770 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.770 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.770 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.770 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.770 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:26.770 [2024-12-11 14:55:19.631581] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:26.770 [2024-12-11 14:55:19.632494] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:14:26.770 [2024-12-11 14:55:19.632534] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.770 [2024-12-11 14:55:19.706190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.770 [2024-12-11 14:55:19.742964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.770 [2024-12-11 14:55:19.743004] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.770 [2024-12-11 14:55:19.743012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.770 [2024-12-11 14:55:19.743018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.770 [2024-12-11 14:55:19.743023] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.770 [2024-12-11 14:55:19.744598] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.770 [2024-12-11 14:55:19.744712] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.770 [2024-12-11 14:55:19.744798] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.770 [2024-12-11 14:55:19.744799] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.770 [2024-12-11 14:55:19.813587] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:26.770 [2024-12-11 14:55:19.814294] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:26.770 [2024-12-11 14:55:19.814685] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:26.770 [2024-12-11 14:55:19.815071] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:26.770 [2024-12-11 14:55:19.815111] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:27.029 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.029 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:27.029 14:55:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:27.978 14:55:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:28.237 14:55:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:28.237 14:55:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:28.237 14:55:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:28.237 14:55:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:28.237 14:55:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:28.237 Malloc1 00:14:28.497 14:55:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:28.497 14:55:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:28.758 14:55:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:29.017 14:55:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:29.017 14:55:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:29.017 14:55:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:29.275 Malloc2 00:14:29.275 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:29.275 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:29.534 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:29.793 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:29.793 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3081283 00:14:29.793 14:55:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3081283 ']' 00:14:29.793 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3081283 00:14:29.793 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:29.793 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.793 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3081283 00:14:29.793 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.793 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.793 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3081283' 00:14:29.793 killing process with pid 3081283 00:14:29.793 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3081283 00:14:29.793 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3081283 00:14:30.053 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:30.053 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:30.053 00:14:30.053 real 0m50.932s 00:14:30.053 user 3m16.983s 00:14:30.053 sys 0m3.322s 00:14:30.053 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.053 14:55:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:30.053 ************************************ 00:14:30.053 END TEST nvmf_vfio_user 00:14:30.053 ************************************ 00:14:30.053 14:55:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:30.053 14:55:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:30.053 14:55:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.053 14:55:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:30.053 ************************************ 00:14:30.053 START TEST nvmf_vfio_user_nvme_compliance 00:14:30.053 ************************************ 00:14:30.053 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:30.313 * Looking for test storage... 
00:14:30.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/compliance 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:30.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.313 --rc genhtml_branch_coverage=1 00:14:30.313 --rc genhtml_function_coverage=1 00:14:30.313 --rc genhtml_legend=1 00:14:30.313 --rc geninfo_all_blocks=1 00:14:30.313 --rc geninfo_unexecuted_blocks=1 00:14:30.313 00:14:30.313 ' 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:30.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.313 --rc genhtml_branch_coverage=1 00:14:30.313 --rc genhtml_function_coverage=1 00:14:30.313 --rc genhtml_legend=1 00:14:30.313 --rc geninfo_all_blocks=1 00:14:30.313 --rc geninfo_unexecuted_blocks=1 00:14:30.313 00:14:30.313 ' 00:14:30.313 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:30.313 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.313 --rc genhtml_branch_coverage=1 00:14:30.314 --rc genhtml_function_coverage=1 00:14:30.314 --rc genhtml_legend=1 00:14:30.314 --rc geninfo_all_blocks=1 00:14:30.314 --rc geninfo_unexecuted_blocks=1 00:14:30.314 00:14:30.314 ' 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:30.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.314 --rc genhtml_branch_coverage=1 00:14:30.314 --rc genhtml_function_coverage=1 00:14:30.314 --rc genhtml_legend=1 00:14:30.314 --rc geninfo_all_blocks=1 00:14:30.314 --rc 
geninfo_unexecuted_blocks=1 00:14:30.314 00:14:30.314 ' 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:30.314 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3082049 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3082049' 00:14:30.314 Process pid: 3082049 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3082049 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3082049 ']' 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:30.314 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:30.314 [2024-12-11 14:55:23.288558] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:14:30.314 [2024-12-11 14:55:23.288607] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.574 [2024-12-11 14:55:23.362117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:30.574 [2024-12-11 14:55:23.402840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.574 [2024-12-11 14:55:23.402876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.574 [2024-12-11 14:55:23.402883] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.574 [2024-12-11 14:55:23.402889] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.574 [2024-12-11 14:55:23.402894] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.574 [2024-12-11 14:55:23.404204] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.574 [2024-12-11 14:55:23.404308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.574 [2024-12-11 14:55:23.404309] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.574 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.574 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:30.574 14:55:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:31.511 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:31.511 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:31.511 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:31.511 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.511 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:31.511 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.511 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:31.511 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:31.511 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.511 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:31.511 malloc0 00:14:31.511 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.511 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:31.511 14:55:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.511 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:31.511 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.770 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:31.770 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.770 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:31.770 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.770 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:31.770 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.770 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:31.770 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.770 14:55:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:31.770 00:14:31.770 00:14:31.770 CUnit - A unit testing framework for C - Version 2.1-3 00:14:31.770 http://cunit.sourceforge.net/ 00:14:31.770 00:14:31.770 00:14:31.770 Suite: nvme_compliance 00:14:31.770 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-11 14:55:24.739584] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:31.770 [2024-12-11 14:55:24.740927] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:31.770 [2024-12-11 14:55:24.740943] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:31.770 [2024-12-11 14:55:24.740950] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:31.770 [2024-12-11 14:55:24.742606] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:31.770 passed 00:14:32.029 Test: admin_identify_ctrlr_verify_fused ...[2024-12-11 14:55:24.822163] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.029 [2024-12-11 14:55:24.825177] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.029 passed 00:14:32.029 Test: admin_identify_ns ...[2024-12-11 14:55:24.901502] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.029 [2024-12-11 14:55:24.965180] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:32.029 [2024-12-11 14:55:24.973169] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:32.029 [2024-12-11 14:55:24.994254] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:14:32.029 passed 00:14:32.029 Test: admin_get_features_mandatory_features ...[2024-12-11 14:55:25.068222] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.029 [2024-12-11 14:55:25.072249] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.287 passed 00:14:32.287 Test: admin_get_features_optional_features ...[2024-12-11 14:55:25.150752] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.287 [2024-12-11 14:55:25.153771] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.287 passed 00:14:32.287 Test: admin_set_features_number_of_queues ...[2024-12-11 14:55:25.229564] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.287 [2024-12-11 14:55:25.334258] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.546 passed 00:14:32.546 Test: admin_get_log_page_mandatory_logs ...[2024-12-11 14:55:25.411020] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.546 [2024-12-11 14:55:25.414045] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.546 passed 00:14:32.546 Test: admin_get_log_page_with_lpo ...[2024-12-11 14:55:25.492016] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.546 [2024-12-11 14:55:25.560166] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:32.546 [2024-12-11 14:55:25.573243] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.806 passed 00:14:32.806 Test: fabric_property_get ...[2024-12-11 14:55:25.648453] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.806 [2024-12-11 14:55:25.649684] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:32.806 [2024-12-11 14:55:25.651469] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.806 passed 00:14:32.806 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-11 14:55:25.730979] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:32.806 [2024-12-11 14:55:25.732217] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:32.806 [2024-12-11 14:55:25.733997] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:32.806 passed 00:14:32.806 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-11 14:55:25.812539] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.065 [2024-12-11 14:55:25.897168] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:33.065 [2024-12-11 14:55:25.913168] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:33.065 [2024-12-11 14:55:25.918248] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.065 passed 00:14:33.065 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-11 14:55:25.992273] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.065 [2024-12-11 14:55:25.993510] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:33.065 [2024-12-11 14:55:25.995297] vfio_user.c:2835:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.065 passed 00:14:33.065 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-11 14:55:26.073102] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.345 [2024-12-11 14:55:26.148167] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:33.345 [2024-12-11 14:55:26.172164] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:33.345 [2024-12-11 14:55:26.177249] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.345 passed 00:14:33.345 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-11 14:55:26.254033] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.345 [2024-12-11 14:55:26.255287] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:33.345 [2024-12-11 14:55:26.255313] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:33.345 [2024-12-11 14:55:26.257055] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.345 passed 00:14:33.345 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-11 14:55:26.334841] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.640 [2024-12-11 14:55:26.427169] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:33.640 [2024-12-11 14:55:26.435166] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:33.640 [2024-12-11 14:55:26.443167] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:33.640 [2024-12-11 14:55:26.451164] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:33.640 [2024-12-11 14:55:26.480255] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.640 passed 00:14:33.640 Test: admin_create_io_sq_verify_pc ...[2024-12-11 14:55:26.557158] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:33.641 [2024-12-11 14:55:26.575172] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:33.641 [2024-12-11 14:55:26.592414] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:33.641 passed 00:14:33.641 Test: admin_create_io_qp_max_qps ...[2024-12-11 14:55:26.667928] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.089 [2024-12-11 14:55:27.773167] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:35.348 [2024-12-11 14:55:28.158268] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.348 passed 00:14:35.348 Test: admin_create_io_sq_shared_cq ...[2024-12-11 14:55:28.238544] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:35.348 [2024-12-11 14:55:28.370165] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:35.607 [2024-12-11 14:55:28.407213] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:35.607 passed 00:14:35.607 00:14:35.607 Run Summary: Type Total Ran Passed Failed Inactive 00:14:35.607 suites 1 1 n/a 0 0 00:14:35.607 tests 18 18 18 0 0 00:14:35.607 asserts 
360 360 360 0 n/a 00:14:35.607 00:14:35.607 Elapsed time = 1.507 seconds 00:14:35.607 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3082049 00:14:35.607 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3082049 ']' 00:14:35.607 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3082049 00:14:35.607 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:35.607 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.607 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3082049 00:14:35.607 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:35.607 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:35.607 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3082049' 00:14:35.607 killing process with pid 3082049 00:14:35.607 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3082049 00:14:35.607 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3082049 00:14:35.866 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:35.866 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:35.866 00:14:35.866 real 0m5.654s 00:14:35.866 user 0m15.880s 00:14:35.866 sys 0m0.498s 00:14:35.866 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.866 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:35.866 ************************************ 00:14:35.866 END TEST nvmf_vfio_user_nvme_compliance 00:14:35.866 ************************************ 00:14:35.866 14:55:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:35.866 14:55:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:35.866 14:55:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.866 14:55:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:35.866 ************************************ 00:14:35.866 START TEST nvmf_vfio_user_fuzz 00:14:35.866 ************************************ 00:14:35.866 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:35.866 * Looking for test storage... 
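For reference, the killprocess teardown traced at the end of the compliance test above follows a small, reusable pattern: confirm the PID is still alive, check what it actually is before signalling, then kill and wait so the EXIT trap can remove /var/run/vfio-user without racing the dying target. A condensed sketch of that flow, with values taken from the trace (the real helper also special-cases targets launched through sudo, a branch not taken in this run):

    pid=3082049                                       # the nvmfpid recorded when nvmf_tgt was launched
    kill -0 "$pid"                                    # fails fast if the process already exited
    process_name=$(ps --no-headers -o comm= "$pid")   # here: reactor_0
    [ "$process_name" = sudo ] && echo "launched via sudo (handled separately)"
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                       # reap it before rm -rf /var/run/vfio-user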
00:14:35.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:14:35.866 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:35.866 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:14:35.866 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:35.866 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:35.866 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:36.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.126 --rc genhtml_branch_coverage=1 00:14:36.126 --rc genhtml_function_coverage=1 00:14:36.126 --rc genhtml_legend=1 00:14:36.126 --rc geninfo_all_blocks=1 00:14:36.126 --rc geninfo_unexecuted_blocks=1 00:14:36.126 00:14:36.126 ' 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:36.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.126 --rc genhtml_branch_coverage=1 00:14:36.126 --rc genhtml_function_coverage=1 00:14:36.126 --rc genhtml_legend=1 00:14:36.126 --rc geninfo_all_blocks=1 00:14:36.126 --rc geninfo_unexecuted_blocks=1 00:14:36.126 00:14:36.126 ' 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:36.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.126 --rc genhtml_branch_coverage=1 00:14:36.126 --rc genhtml_function_coverage=1 00:14:36.126 --rc genhtml_legend=1 00:14:36.126 --rc geninfo_all_blocks=1 00:14:36.126 --rc geninfo_unexecuted_blocks=1 00:14:36.126 00:14:36.126 ' 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:36.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.126 --rc genhtml_branch_coverage=1 00:14:36.126 --rc genhtml_function_coverage=1 00:14:36.126 --rc genhtml_legend=1 00:14:36.126 --rc geninfo_all_blocks=1 00:14:36.126 --rc geninfo_unexecuted_blocks=1 00:14:36.126 00:14:36.126 ' 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:36.126 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:36.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3083036 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3083036' 00:14:36.127 Process pid: 3083036 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3083036 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3083036 ']' 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
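Both targets in this file are the same nvmf_tgt binary; what changes between the compliance and fuzz runs is essentially the core mask. The annotations below use the standard SPDK application options, matched against what the log itself reports:

    # -i 0      shared-memory id; matches the "spdk_trace -s nvmf -i 0" hint printed at startup
    # -e 0xFFFF enable every tracepoint group (the "Tracepoint Group Mask 0xFFFF specified" notice)
    # -m 0x7    compliance run above: cores 0-2, hence the three "Reactor started" lines
    # -m 0x1    this fuzz target: a single reactor on core 0, while the fuzzer below is pinned to core 1 (-m 0x2)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1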
00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.127 14:55:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:36.386 14:55:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.386 14:55:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:36.386 14:55:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:37.323 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:37.323 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.323 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:37.323 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.323 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:37.323 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:37.323 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.323 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:37.323 malloc0 00:14:37.323 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.323 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:37.323 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.323 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:37.323 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.323 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:37.323 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.323 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:37.324 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.324 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:37.324 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.324 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:37.324 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.324 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
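The fuzz target is assembled through the same handful of RPCs traced just above. As a standalone sketch, these are the equivalent calls issued with scripts/rpc.py against the default /var/tmp/spdk.sock (arguments copied from the trace; the test's rpc_cmd helper drives the same RPC interface):

    scripts/rpc.py nvmf_create_transport -t VFIOUSER        # vfio-user transport instead of TCP
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0     # 64 MB ram-backed bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk    # -a: allow any host, -s: serial number
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0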
00:14:37.324 14:55:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:09.408 Fuzzing completed. Shutting down the fuzz application 00:15:09.408 00:15:09.408 Dumping successful admin opcodes: 00:15:09.408 9, 10, 00:15:09.408 Dumping successful io opcodes: 00:15:09.408 0, 00:15:09.408 NS: 0x20000081ef00 I/O qp, Total commands completed: 1002054, total successful commands: 3929, random_seed: 796481472 00:15:09.408 NS: 0x20000081ef00 admin qp, Total commands completed: 247024, total successful commands: 58, random_seed: 3440574336 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3083036 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3083036 ']' 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3083036 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3083036 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3083036' 00:15:09.408 killing process with pid 3083036 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3083036 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3083036 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:09.408 00:15:09.408 real 0m32.211s 00:15:09.408 user 0m30.223s 00:15:09.408 sys 0m30.905s 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:09.408 
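Reading the fuzzer summary above: the opcode lists appear to be decimal NVMe opcodes, so the admin commands that completed successfully are 9 (Set Features) and 10 (Get Features), and the only successful I/O opcode is 0 (Flush); the vast majority of the roughly 1.25 million randomly generated commands were rejected by the target, which is the intended outcome for a timed run (-t 30) against a compliant controller. Because the run was seeded with -S 123456, repeating the same invocation should, in principle, reproduce the same command stream if a failure ever needs to be replayed.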
************************************ 00:15:09.408 END TEST nvmf_vfio_user_fuzz 00:15:09.408 ************************************ 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:09.408 14:56:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:09.408 ************************************ 00:15:09.408 START TEST nvmf_auth_target 00:15:09.408 ************************************ 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:09.408 * Looking for test storage... 00:15:09.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:09.408 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:09.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.409 --rc genhtml_branch_coverage=1 00:15:09.409 --rc genhtml_function_coverage=1 00:15:09.409 --rc genhtml_legend=1 00:15:09.409 --rc geninfo_all_blocks=1 00:15:09.409 --rc geninfo_unexecuted_blocks=1 00:15:09.409 00:15:09.409 ' 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:09.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.409 --rc genhtml_branch_coverage=1 00:15:09.409 --rc genhtml_function_coverage=1 00:15:09.409 --rc genhtml_legend=1 00:15:09.409 --rc geninfo_all_blocks=1 00:15:09.409 --rc geninfo_unexecuted_blocks=1 00:15:09.409 00:15:09.409 ' 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:09.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.409 --rc genhtml_branch_coverage=1 00:15:09.409 --rc genhtml_function_coverage=1 00:15:09.409 --rc genhtml_legend=1 00:15:09.409 --rc geninfo_all_blocks=1 00:15:09.409 --rc geninfo_unexecuted_blocks=1 00:15:09.409 00:15:09.409 ' 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:09.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.409 --rc genhtml_branch_coverage=1 00:15:09.409 --rc genhtml_function_coverage=1 00:15:09.409 --rc genhtml_legend=1 00:15:09.409 --rc geninfo_all_blocks=1 00:15:09.409 --rc geninfo_unexecuted_blocks=1 00:15:09.409 00:15:09.409 ' 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:15:09.409 14:56:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:09.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:09.409 14:56:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:14.687 
14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:14.687 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:14.688 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:14.688 14:56:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:14.688 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:14.688 Found net devices under 0000:86:00.0: cvl_0_0 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:14.688 Found net devices under 0000:86:00.1: cvl_0_1 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:14.688 14:56:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:14.688 14:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:14.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:14.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:15:14.688 00:15:14.688 --- 10.0.0.2 ping statistics --- 00:15:14.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.688 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:14.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:14.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:15:14.688 00:15:14.688 --- 10.0.0.1 ping statistics --- 00:15:14.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.688 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3091340 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3091340 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3091340 ']' 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
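A minimal sketch of the topology that nvmf_tcp_init builds in the trace above, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing; the ipts() comment wrapper on the iptables rule is omitted:

  # Target-side port moves into its own namespace; the initiator-side port stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  # Reachability checks in both directions, matching the ping output above.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1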
00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3091380 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:14.688 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5adc4cbbd6c27869e23eb5fbfafeb1fe3e61aaaa6fa7cd60 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.uQv 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5adc4cbbd6c27869e23eb5fbfafeb1fe3e61aaaa6fa7cd60 0 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5adc4cbbd6c27869e23eb5fbfafeb1fe3e61aaaa6fa7cd60 0 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5adc4cbbd6c27869e23eb5fbfafeb1fe3e61aaaa6fa7cd60 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
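The auth test then runs two SPDK applications: nvmf_tgt inside the namespace as the target (default RPC socket /var/tmp/spdk.sock, driven by rpc_cmd) and a second spdk_tgt in the root namespace acting as the host/initiator (RPC socket /var/tmp/host.sock, driven by the hostrpc helper). A condensed sketch, with build paths shortened and the readiness wait reduced to what waitforlisten effectively does:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!
  ./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
  hostpid=$!
  # Poll each RPC socket until the corresponding app answers.
  ./scripts/rpc.py rpc_get_methods > /dev/null                          # target side
  ./scripts/rpc.py -s /var/tmp/host.sock rpc_get_methods > /dev/null    # host side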
00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.uQv 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.uQv 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.uQv 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3762331da7227d2cf721474bc1181e93f1c72137a06a510719d28cf4a73d0204 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.3Ww 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3762331da7227d2cf721474bc1181e93f1c72137a06a510719d28cf4a73d0204 3 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3762331da7227d2cf721474bc1181e93f1c72137a06a510719d28cf4a73d0204 3 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3762331da7227d2cf721474bc1181e93f1c72137a06a510719d28cf4a73d0204 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.3Ww 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.3Ww 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.3Ww 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
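Each gen_dhchap_key call in the trace draws len/2 random bytes with xxd, keeps the resulting hex string as the secret material, and wraps it into an NVMe DH-HMAC-CHAP secret of the form DHHC-1:0<digest-id>:<base64 payload>: (digest ids follow the map above: null=0, sha256=1, sha384=2, sha512=3). A rough equivalent for the first key, assuming nvmf/common.sh is sourced so format_dhchap_key is available; nvme-cli's gen-dhchap-key produces secrets in the same format:

  key=$(xxd -p -c0 -l 24 /dev/urandom)          # 24 random bytes -> 48 hex chars ("null 48")
  file=$(mktemp -t spdk.key-null.XXX)
  format_dhchap_key "$key" 0 > "$file"          # 0 = null digest; emits the DHHC-1:00:...: string
  chmod 0600 "$file"
  keys[0]=$file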
00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6375b957d51986c1ed9e76341ce5dbbc 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.K4c 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6375b957d51986c1ed9e76341ce5dbbc 1 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6375b957d51986c1ed9e76341ce5dbbc 1 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6375b957d51986c1ed9e76341ce5dbbc 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.K4c 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.K4c 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.K4c 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7d048f01721fdff9837a128a1154b74f9ee811bd0783d093 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.BRj 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7d048f01721fdff9837a128a1154b74f9ee811bd0783d093 2 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7d048f01721fdff9837a128a1154b74f9ee811bd0783d093 2 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:14.689 14:56:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7d048f01721fdff9837a128a1154b74f9ee811bd0783d093 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:14.689 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.BRj 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.BRj 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.BRj 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a915a660eef50d623812853b47cd81dbf0a2464ba33a5d40 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Jiu 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a915a660eef50d623812853b47cd81dbf0a2464ba33a5d40 2 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a915a660eef50d623812853b47cd81dbf0a2464ba33a5d40 2 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a915a660eef50d623812853b47cd81dbf0a2464ba33a5d40 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Jiu 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Jiu 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.Jiu 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9895d8d54408343b07aa5b2bd6837213 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kwF 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9895d8d54408343b07aa5b2bd6837213 1 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9895d8d54408343b07aa5b2bd6837213 1 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9895d8d54408343b07aa5b2bd6837213 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kwF 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kwF 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.kwF 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3a05a5b8235920942f513d62d87a5368f24b11fe479e347d1271870876d21575 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.1xO 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 3a05a5b8235920942f513d62d87a5368f24b11fe479e347d1271870876d21575 3 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3a05a5b8235920942f513d62d87a5368f24b11fe479e347d1271870876d21575 3 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:15:14.949 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:15:14.950 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3a05a5b8235920942f513d62d87a5368f24b11fe479e347d1271870876d21575 00:15:14.950 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:15:14.950 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:15:14.950 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.1xO 00:15:14.950 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.1xO 00:15:14.950 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.1xO 00:15:14.950 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:14.950 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3091340 00:15:14.950 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3091340 ']' 00:15:14.950 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.950 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.950 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.950 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.950 14:56:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.208 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.208 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:15.208 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3091380 /var/tmp/host.sock 00:15:15.208 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3091380 ']' 00:15:15.208 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:15.208 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.208 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:15.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
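At this point the trace has minted four subsystem secrets and three matching controller secrets; ckeys[3] is deliberately left empty, so keyid 3 exercises unidirectional authentication only. Collected from the file names and gen_dhchap_key arguments above (digest/length per file):

  keys[0]=/tmp/spdk.key-null.uQv     ckeys[0]=/tmp/spdk.key-sha512.3Ww    # null/48   paired with sha512/64
  keys[1]=/tmp/spdk.key-sha256.K4c   ckeys[1]=/tmp/spdk.key-sha384.BRj    # sha256/32 paired with sha384/48
  keys[2]=/tmp/spdk.key-sha384.Jiu   ckeys[2]=/tmp/spdk.key-sha256.kwF    # sha384/48 paired with sha256/32
  keys[3]=/tmp/spdk.key-sha512.1xO   ckeys[3]=                            # sha512/64, no controller key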
00:15:15.208 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.208 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.467 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:15.467 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:15.467 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:15.467 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.467 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.467 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.467 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:15.467 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uQv 00:15:15.467 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.467 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.467 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.467 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.uQv 00:15:15.467 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.uQv 00:15:15.726 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.3Ww ]] 00:15:15.726 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3Ww 00:15:15.726 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.726 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.726 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.726 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3Ww 00:15:15.726 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3Ww 00:15:15.985 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:15.985 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.K4c 00:15:15.985 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.985 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.985 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.985 14:56:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.K4c 00:15:15.985 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.K4c 00:15:15.985 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.BRj ]] 00:15:15.985 14:56:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BRj 00:15:15.985 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.985 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.985 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.985 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BRj 00:15:15.985 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BRj 00:15:16.244 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:16.244 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Jiu 00:15:16.244 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.244 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.244 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.244 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Jiu 00:15:16.244 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Jiu 00:15:16.503 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.kwF ]] 00:15:16.504 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kwF 00:15:16.504 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.504 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.504 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.504 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kwF 00:15:16.504 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kwF 00:15:16.762 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:16.762 14:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.1xO 00:15:16.762 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.762 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.762 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.762 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.1xO 00:15:16.763 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.1xO 00:15:16.763 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:16.763 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:16.763 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:16.763 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.763 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:16.763 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:17.022 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:17.022 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.022 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:17.022 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:17.022 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:17.022 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.022 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.022 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.022 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.022 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.022 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.022 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
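Condensed from the surrounding RPC traffic, one authentication round for keyid 0 wires the pieces together as follows (rpc.py path shortened; $hostnqn stands for the nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-... value generated earlier):

  # Register the key files with the keyring of both applications.
  rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.uQv
  rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3Ww
  rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.uQv
  rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3Ww

  # Pin the host to the digest/dhgroup pair under test.
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

  # Require DH-HMAC-CHAP for this host on the subsystem, attach from the host side,
  # then check that the new qpair actually completed authentication.
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'   # expect "completed"
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0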
00:15:17.022 14:56:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:17.281 00:15:17.281 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.281 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.281 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.540 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.540 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.540 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.540 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.540 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.540 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.540 { 00:15:17.540 "cntlid": 1, 00:15:17.540 "qid": 0, 00:15:17.540 "state": "enabled", 00:15:17.540 "thread": "nvmf_tgt_poll_group_000", 00:15:17.540 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:17.540 "listen_address": { 00:15:17.540 "trtype": "TCP", 00:15:17.540 "adrfam": "IPv4", 00:15:17.540 "traddr": "10.0.0.2", 00:15:17.540 "trsvcid": "4420" 00:15:17.540 }, 00:15:17.540 "peer_address": { 00:15:17.540 "trtype": "TCP", 00:15:17.540 "adrfam": "IPv4", 00:15:17.540 "traddr": "10.0.0.1", 00:15:17.540 "trsvcid": "54206" 00:15:17.540 }, 00:15:17.540 "auth": { 00:15:17.540 "state": "completed", 00:15:17.540 "digest": "sha256", 00:15:17.540 "dhgroup": "null" 00:15:17.540 } 00:15:17.540 } 00:15:17.540 ]' 00:15:17.540 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.540 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.540 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.540 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:17.540 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.799 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.799 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.799 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.799 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:15:17.799 14:56:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:15:18.367 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.367 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:18.367 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.367 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.367 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.367 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.367 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:18.367 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:18.626 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:18.626 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.626 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.626 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:18.626 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:18.626 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.626 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.626 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.627 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.627 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.627 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.627 14:56:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.627 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.886 00:15:18.886 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.886 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.886 14:56:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.145 14:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.145 14:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.145 14:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.145 14:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.145 14:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.145 14:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:19.145 { 00:15:19.145 "cntlid": 3, 00:15:19.145 "qid": 0, 00:15:19.145 "state": "enabled", 00:15:19.145 "thread": "nvmf_tgt_poll_group_000", 00:15:19.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:19.145 "listen_address": { 00:15:19.145 "trtype": "TCP", 00:15:19.145 "adrfam": "IPv4", 00:15:19.145 "traddr": "10.0.0.2", 00:15:19.145 "trsvcid": "4420" 00:15:19.145 }, 00:15:19.145 "peer_address": { 00:15:19.145 "trtype": "TCP", 00:15:19.145 "adrfam": "IPv4", 00:15:19.145 "traddr": "10.0.0.1", 00:15:19.145 "trsvcid": "54228" 00:15:19.145 }, 00:15:19.145 "auth": { 00:15:19.145 "state": "completed", 00:15:19.145 "digest": "sha256", 00:15:19.145 "dhgroup": "null" 00:15:19.145 } 00:15:19.145 } 00:15:19.145 ]' 00:15:19.145 14:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:19.145 14:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.145 14:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:19.404 14:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:19.404 14:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:19.404 14:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.404 14:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.404 14:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.661 14:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:15:19.661 14:56:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.228 
14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.228 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.487 00:15:20.487 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.487 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.487 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.746 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.746 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.746 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.746 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.746 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.746 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.746 { 00:15:20.746 "cntlid": 5, 00:15:20.746 "qid": 0, 00:15:20.746 "state": "enabled", 00:15:20.746 "thread": "nvmf_tgt_poll_group_000", 00:15:20.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:20.746 "listen_address": { 00:15:20.746 "trtype": "TCP", 00:15:20.746 "adrfam": "IPv4", 00:15:20.746 "traddr": "10.0.0.2", 00:15:20.746 "trsvcid": "4420" 00:15:20.746 }, 00:15:20.746 "peer_address": { 00:15:20.746 "trtype": "TCP", 00:15:20.746 "adrfam": "IPv4", 00:15:20.746 "traddr": "10.0.0.1", 00:15:20.746 "trsvcid": "54248" 00:15:20.746 }, 00:15:20.746 "auth": { 00:15:20.746 "state": "completed", 00:15:20.746 "digest": "sha256", 00:15:20.746 "dhgroup": "null" 00:15:20.746 } 00:15:20.746 } 00:15:20.746 ]' 00:15:20.746 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.746 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.746 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:21.005 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:21.005 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:21.005 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.005 14:56:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.005 14:56:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.005 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:15:21.005 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:15:21.573 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.573 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:21.832 14:56:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.091 00:15:22.091 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.091 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.091 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.349 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.349 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.349 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.349 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.349 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.350 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.350 { 00:15:22.350 "cntlid": 7, 00:15:22.350 "qid": 0, 00:15:22.350 "state": "enabled", 00:15:22.350 "thread": "nvmf_tgt_poll_group_000", 00:15:22.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:22.350 "listen_address": { 00:15:22.350 "trtype": "TCP", 00:15:22.350 "adrfam": "IPv4", 00:15:22.350 "traddr": "10.0.0.2", 00:15:22.350 "trsvcid": "4420" 00:15:22.350 }, 00:15:22.350 "peer_address": { 00:15:22.350 "trtype": "TCP", 00:15:22.350 "adrfam": "IPv4", 00:15:22.350 "traddr": "10.0.0.1", 00:15:22.350 "trsvcid": "54268" 00:15:22.350 }, 00:15:22.350 "auth": { 00:15:22.350 "state": "completed", 00:15:22.350 "digest": "sha256", 00:15:22.350 "dhgroup": "null" 00:15:22.350 } 00:15:22.350 } 00:15:22.350 ]' 00:15:22.350 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.350 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.350 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.350 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:22.350 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.608 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.608 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.608 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.608 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:15:22.609 14:56:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:15:23.176 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.176 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:23.176 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.176 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.176 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.176 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:23.176 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.176 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:23.176 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:23.435 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:23.435 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.435 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.435 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:23.435 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:23.435 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.435 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.436 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.436 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.436 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.436 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.436 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.436 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.695 00:15:23.695 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:23.695 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:23.695 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.953 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.953 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.953 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.953 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.953 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.953 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.953 { 00:15:23.953 "cntlid": 9, 00:15:23.953 "qid": 0, 00:15:23.953 "state": "enabled", 00:15:23.953 "thread": "nvmf_tgt_poll_group_000", 00:15:23.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:23.953 "listen_address": { 00:15:23.953 "trtype": "TCP", 00:15:23.953 "adrfam": "IPv4", 00:15:23.953 "traddr": "10.0.0.2", 00:15:23.953 "trsvcid": "4420" 00:15:23.953 }, 00:15:23.953 "peer_address": { 00:15:23.954 "trtype": "TCP", 00:15:23.954 "adrfam": "IPv4", 00:15:23.954 "traddr": "10.0.0.1", 00:15:23.954 "trsvcid": "54290" 00:15:23.954 }, 00:15:23.954 "auth": { 00:15:23.954 "state": "completed", 00:15:23.954 "digest": "sha256", 00:15:23.954 "dhgroup": "ffdhe2048" 00:15:23.954 } 00:15:23.954 } 00:15:23.954 ]' 00:15:23.954 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.954 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:23.954 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.954 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:15:23.954 14:56:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.213 14:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.213 14:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.213 14:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.213 14:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:15:24.213 14:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:15:24.781 14:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.781 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.781 14:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:24.781 14:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.781 14:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.781 14:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.781 14:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.781 14:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:24.781 14:56:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:25.040 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:25.040 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.040 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:25.040 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:25.040 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:25.040 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.040 14:56:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.040 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.040 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.040 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.040 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.040 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.040 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.299 00:15:25.299 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.299 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.299 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.558 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.558 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.558 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.558 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.558 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.558 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.558 { 00:15:25.558 "cntlid": 11, 00:15:25.558 "qid": 0, 00:15:25.558 "state": "enabled", 00:15:25.558 "thread": "nvmf_tgt_poll_group_000", 00:15:25.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:25.558 "listen_address": { 00:15:25.558 "trtype": "TCP", 00:15:25.558 "adrfam": "IPv4", 00:15:25.558 "traddr": "10.0.0.2", 00:15:25.558 "trsvcid": "4420" 00:15:25.558 }, 00:15:25.558 "peer_address": { 00:15:25.558 "trtype": "TCP", 00:15:25.558 "adrfam": "IPv4", 00:15:25.558 "traddr": "10.0.0.1", 00:15:25.558 "trsvcid": "54308" 00:15:25.558 }, 00:15:25.558 "auth": { 00:15:25.558 "state": "completed", 00:15:25.558 "digest": "sha256", 00:15:25.558 "dhgroup": "ffdhe2048" 00:15:25.558 } 00:15:25.558 } 00:15:25.558 ]' 00:15:25.558 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.558 14:56:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:25.558 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.558 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:25.558 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.817 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.817 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.817 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.817 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:15:25.818 14:56:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:15:26.385 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.385 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:26.385 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.385 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.385 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.385 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.385 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:26.385 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:26.644 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:26.644 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.644 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:26.644 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:26.644 14:56:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:26.644 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.644 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.644 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.644 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.644 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.644 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.644 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.645 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:26.903 00:15:26.904 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.904 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.904 14:56:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.163 14:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.163 14:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.163 14:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.163 14:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.163 14:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.163 14:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.163 { 00:15:27.163 "cntlid": 13, 00:15:27.163 "qid": 0, 00:15:27.163 "state": "enabled", 00:15:27.163 "thread": "nvmf_tgt_poll_group_000", 00:15:27.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:27.163 "listen_address": { 00:15:27.163 "trtype": "TCP", 00:15:27.163 "adrfam": "IPv4", 00:15:27.163 "traddr": "10.0.0.2", 00:15:27.163 "trsvcid": "4420" 00:15:27.163 }, 00:15:27.163 "peer_address": { 00:15:27.163 "trtype": "TCP", 00:15:27.163 "adrfam": "IPv4", 00:15:27.163 "traddr": "10.0.0.1", 00:15:27.163 "trsvcid": "41112" 00:15:27.163 }, 00:15:27.163 "auth": { 00:15:27.163 "state": "completed", 00:15:27.163 "digest": 
"sha256", 00:15:27.163 "dhgroup": "ffdhe2048" 00:15:27.163 } 00:15:27.163 } 00:15:27.163 ]' 00:15:27.163 14:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.163 14:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:27.163 14:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.163 14:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:27.163 14:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.163 14:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.163 14:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.163 14:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.422 14:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:15:27.422 14:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:15:27.992 14:56:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.992 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:27.992 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.992 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.992 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.992 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:27.992 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:27.992 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:28.276 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:28.276 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.276 14:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:28.276 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:28.276 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:28.276 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.276 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:28.276 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.276 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.276 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.276 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:28.276 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.276 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:28.549 00:15:28.549 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.549 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.549 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.808 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.808 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.808 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.808 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.808 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.808 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:28.808 { 00:15:28.808 "cntlid": 15, 00:15:28.808 "qid": 0, 00:15:28.808 "state": "enabled", 00:15:28.808 "thread": "nvmf_tgt_poll_group_000", 00:15:28.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:28.808 "listen_address": { 00:15:28.808 "trtype": "TCP", 00:15:28.808 "adrfam": "IPv4", 00:15:28.808 "traddr": "10.0.0.2", 00:15:28.808 "trsvcid": "4420" 00:15:28.808 }, 00:15:28.808 "peer_address": { 00:15:28.808 "trtype": "TCP", 00:15:28.808 "adrfam": "IPv4", 00:15:28.808 "traddr": "10.0.0.1", 00:15:28.808 
"trsvcid": "41142" 00:15:28.808 }, 00:15:28.808 "auth": { 00:15:28.808 "state": "completed", 00:15:28.808 "digest": "sha256", 00:15:28.808 "dhgroup": "ffdhe2048" 00:15:28.808 } 00:15:28.808 } 00:15:28.808 ]' 00:15:28.808 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:28.808 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.808 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:28.808 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:28.808 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:28.808 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.808 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.809 14:56:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.067 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:15:29.067 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:15:29.635 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.635 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:29.635 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.635 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.635 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.635 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:29.635 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:29.635 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:29.635 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:29.894 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:29.894 14:56:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:29.894 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:29.894 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:29.894 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:29.894 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.894 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.894 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.894 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.894 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.894 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.894 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.894 14:56:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:30.153 00:15:30.153 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.153 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.153 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.412 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.412 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.412 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.412 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.412 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.412 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.412 { 00:15:30.412 "cntlid": 17, 00:15:30.412 "qid": 0, 00:15:30.412 "state": "enabled", 00:15:30.412 "thread": "nvmf_tgt_poll_group_000", 00:15:30.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:30.412 "listen_address": { 00:15:30.412 "trtype": "TCP", 00:15:30.412 "adrfam": 
"IPv4", 00:15:30.412 "traddr": "10.0.0.2", 00:15:30.412 "trsvcid": "4420" 00:15:30.412 }, 00:15:30.412 "peer_address": { 00:15:30.412 "trtype": "TCP", 00:15:30.412 "adrfam": "IPv4", 00:15:30.412 "traddr": "10.0.0.1", 00:15:30.412 "trsvcid": "41160" 00:15:30.412 }, 00:15:30.412 "auth": { 00:15:30.412 "state": "completed", 00:15:30.412 "digest": "sha256", 00:15:30.412 "dhgroup": "ffdhe3072" 00:15:30.412 } 00:15:30.412 } 00:15:30.412 ]' 00:15:30.412 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:30.412 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:30.412 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.412 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:30.412 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.412 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.412 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.412 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.672 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:15:30.672 14:56:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:15:31.239 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.239 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:31.239 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.239 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.239 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.239 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.239 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:31.239 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:31.498 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:31.498 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.498 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:31.498 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:31.499 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:31.499 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.499 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.499 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.499 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.499 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.499 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.499 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.499 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.757 00:15:31.757 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.757 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.757 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.016 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.017 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.017 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.017 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.017 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.017 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.017 { 
00:15:32.017 "cntlid": 19, 00:15:32.017 "qid": 0, 00:15:32.017 "state": "enabled", 00:15:32.017 "thread": "nvmf_tgt_poll_group_000", 00:15:32.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:32.017 "listen_address": { 00:15:32.017 "trtype": "TCP", 00:15:32.017 "adrfam": "IPv4", 00:15:32.017 "traddr": "10.0.0.2", 00:15:32.017 "trsvcid": "4420" 00:15:32.017 }, 00:15:32.017 "peer_address": { 00:15:32.017 "trtype": "TCP", 00:15:32.017 "adrfam": "IPv4", 00:15:32.017 "traddr": "10.0.0.1", 00:15:32.017 "trsvcid": "41188" 00:15:32.017 }, 00:15:32.017 "auth": { 00:15:32.017 "state": "completed", 00:15:32.017 "digest": "sha256", 00:15:32.017 "dhgroup": "ffdhe3072" 00:15:32.017 } 00:15:32.017 } 00:15:32.017 ]' 00:15:32.017 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.017 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:32.017 14:56:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.017 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:32.017 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.017 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.017 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.017 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.275 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:15:32.275 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:15:32.843 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.843 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:32.843 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.843 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.843 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.843 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.843 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:32.843 14:56:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:33.102 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:33.102 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.102 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:33.102 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:33.102 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:33.102 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.102 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.102 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.102 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.102 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.102 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.102 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.102 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.361 00:15:33.361 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.361 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.361 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.620 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.620 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.620 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.620 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.620 14:56:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.620 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.620 { 00:15:33.620 "cntlid": 21, 00:15:33.620 "qid": 0, 00:15:33.620 "state": "enabled", 00:15:33.620 "thread": "nvmf_tgt_poll_group_000", 00:15:33.620 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:33.620 "listen_address": { 00:15:33.620 "trtype": "TCP", 00:15:33.620 "adrfam": "IPv4", 00:15:33.621 "traddr": "10.0.0.2", 00:15:33.621 "trsvcid": "4420" 00:15:33.621 }, 00:15:33.621 "peer_address": { 00:15:33.621 "trtype": "TCP", 00:15:33.621 "adrfam": "IPv4", 00:15:33.621 "traddr": "10.0.0.1", 00:15:33.621 "trsvcid": "41226" 00:15:33.621 }, 00:15:33.621 "auth": { 00:15:33.621 "state": "completed", 00:15:33.621 "digest": "sha256", 00:15:33.621 "dhgroup": "ffdhe3072" 00:15:33.621 } 00:15:33.621 } 00:15:33.621 ]' 00:15:33.621 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.621 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:33.621 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.621 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:33.621 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.621 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.621 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.621 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.880 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:15:33.880 14:56:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:15:34.447 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.447 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.447 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.447 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.447 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.447 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.447 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:34.447 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:34.707 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:34.707 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:34.707 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:34.707 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:34.707 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:34.707 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.707 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:34.707 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.707 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.707 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.707 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:34.707 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:34.707 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:34.965 00:15:34.965 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:34.965 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.965 14:56:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.224 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.224 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.224 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:35.224 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.224 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.224 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.224 { 00:15:35.224 "cntlid": 23, 00:15:35.224 "qid": 0, 00:15:35.224 "state": "enabled", 00:15:35.224 "thread": "nvmf_tgt_poll_group_000", 00:15:35.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:35.224 "listen_address": { 00:15:35.224 "trtype": "TCP", 00:15:35.224 "adrfam": "IPv4", 00:15:35.224 "traddr": "10.0.0.2", 00:15:35.224 "trsvcid": "4420" 00:15:35.224 }, 00:15:35.224 "peer_address": { 00:15:35.224 "trtype": "TCP", 00:15:35.224 "adrfam": "IPv4", 00:15:35.224 "traddr": "10.0.0.1", 00:15:35.224 "trsvcid": "41252" 00:15:35.224 }, 00:15:35.224 "auth": { 00:15:35.224 "state": "completed", 00:15:35.224 "digest": "sha256", 00:15:35.224 "dhgroup": "ffdhe3072" 00:15:35.224 } 00:15:35.224 } 00:15:35.224 ]' 00:15:35.224 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.224 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:35.224 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.224 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:35.224 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.224 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.224 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.224 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.483 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:15:35.483 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:15:36.051 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.051 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:36.051 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.051 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.051 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.051 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.051 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.051 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:36.051 14:56:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:36.311 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:36.311 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.311 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:36.311 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:36.311 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:36.311 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.311 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.311 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.311 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.311 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.311 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.311 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.311 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:36.569 00:15:36.569 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.569 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.569 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.828 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.828 14:56:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.828 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.828 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.828 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.828 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.828 { 00:15:36.828 "cntlid": 25, 00:15:36.828 "qid": 0, 00:15:36.828 "state": "enabled", 00:15:36.828 "thread": "nvmf_tgt_poll_group_000", 00:15:36.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:36.828 "listen_address": { 00:15:36.828 "trtype": "TCP", 00:15:36.828 "adrfam": "IPv4", 00:15:36.828 "traddr": "10.0.0.2", 00:15:36.828 "trsvcid": "4420" 00:15:36.828 }, 00:15:36.829 "peer_address": { 00:15:36.829 "trtype": "TCP", 00:15:36.829 "adrfam": "IPv4", 00:15:36.829 "traddr": "10.0.0.1", 00:15:36.829 "trsvcid": "54428" 00:15:36.829 }, 00:15:36.829 "auth": { 00:15:36.829 "state": "completed", 00:15:36.829 "digest": "sha256", 00:15:36.829 "dhgroup": "ffdhe4096" 00:15:36.829 } 00:15:36.829 } 00:15:36.829 ]' 00:15:36.829 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.829 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:36.829 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.829 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:36.829 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.829 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.829 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.829 14:56:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.088 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:15:37.088 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:15:37.656 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.656 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:37.656 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.656 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.656 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.656 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.656 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:37.656 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:37.915 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:37.915 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.915 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:37.915 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:37.915 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:37.915 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.915 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.915 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.915 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.915 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.915 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.915 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.915 14:56:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:38.181 00:15:38.181 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.181 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.181 14:56:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.448 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.448 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.448 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.448 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.448 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.448 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.448 { 00:15:38.448 "cntlid": 27, 00:15:38.448 "qid": 0, 00:15:38.448 "state": "enabled", 00:15:38.448 "thread": "nvmf_tgt_poll_group_000", 00:15:38.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:38.448 "listen_address": { 00:15:38.448 "trtype": "TCP", 00:15:38.448 "adrfam": "IPv4", 00:15:38.448 "traddr": "10.0.0.2", 00:15:38.448 "trsvcid": "4420" 00:15:38.448 }, 00:15:38.448 "peer_address": { 00:15:38.448 "trtype": "TCP", 00:15:38.448 "adrfam": "IPv4", 00:15:38.448 "traddr": "10.0.0.1", 00:15:38.448 "trsvcid": "54468" 00:15:38.448 }, 00:15:38.448 "auth": { 00:15:38.448 "state": "completed", 00:15:38.448 "digest": "sha256", 00:15:38.448 "dhgroup": "ffdhe4096" 00:15:38.448 } 00:15:38.448 } 00:15:38.448 ]' 00:15:38.448 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.448 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:38.448 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.448 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:38.448 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.448 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.448 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.448 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.707 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:15:38.707 14:56:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:15:39.276 14:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.276 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:39.276 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.276 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.276 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.276 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.276 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:39.276 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:39.535 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:39.535 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.535 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:39.535 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:39.535 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:39.535 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.535 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.535 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.535 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.535 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.535 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.535 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.535 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.794 00:15:39.794 14:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:39.794 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:39.794 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.053 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.053 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.053 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.053 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.053 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.053 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.053 { 00:15:40.053 "cntlid": 29, 00:15:40.053 "qid": 0, 00:15:40.053 "state": "enabled", 00:15:40.053 "thread": "nvmf_tgt_poll_group_000", 00:15:40.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:40.053 "listen_address": { 00:15:40.053 "trtype": "TCP", 00:15:40.053 "adrfam": "IPv4", 00:15:40.053 "traddr": "10.0.0.2", 00:15:40.053 "trsvcid": "4420" 00:15:40.053 }, 00:15:40.053 "peer_address": { 00:15:40.053 "trtype": "TCP", 00:15:40.053 "adrfam": "IPv4", 00:15:40.053 "traddr": "10.0.0.1", 00:15:40.053 "trsvcid": "54480" 00:15:40.053 }, 00:15:40.053 "auth": { 00:15:40.053 "state": "completed", 00:15:40.053 "digest": "sha256", 00:15:40.053 "dhgroup": "ffdhe4096" 00:15:40.053 } 00:15:40.053 } 00:15:40.053 ]' 00:15:40.053 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.053 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:40.053 14:56:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.053 14:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:40.053 14:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.053 14:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.053 14:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.053 14:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.312 14:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:15:40.312 14:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 
0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:15:40.880 14:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.880 14:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:40.880 14:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.880 14:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.880 14:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.880 14:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.880 14:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:40.880 14:56:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:41.139 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:41.139 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.139 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:41.139 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:41.139 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:41.139 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.139 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:41.139 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.139 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.139 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.139 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:41.139 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.139 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:41.398 00:15:41.399 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.399 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.399 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.658 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.658 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.658 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.658 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.658 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.658 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.658 { 00:15:41.658 "cntlid": 31, 00:15:41.658 "qid": 0, 00:15:41.658 "state": "enabled", 00:15:41.658 "thread": "nvmf_tgt_poll_group_000", 00:15:41.658 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:41.658 "listen_address": { 00:15:41.658 "trtype": "TCP", 00:15:41.658 "adrfam": "IPv4", 00:15:41.658 "traddr": "10.0.0.2", 00:15:41.658 "trsvcid": "4420" 00:15:41.658 }, 00:15:41.658 "peer_address": { 00:15:41.658 "trtype": "TCP", 00:15:41.658 "adrfam": "IPv4", 00:15:41.658 "traddr": "10.0.0.1", 00:15:41.658 "trsvcid": "54510" 00:15:41.658 }, 00:15:41.658 "auth": { 00:15:41.658 "state": "completed", 00:15:41.658 "digest": "sha256", 00:15:41.658 "dhgroup": "ffdhe4096" 00:15:41.658 } 00:15:41.658 } 00:15:41.658 ]' 00:15:41.658 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.658 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:41.658 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.658 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:41.658 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.658 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.658 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.658 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.917 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:15:41.917 14:56:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:15:42.484 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.484 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:42.484 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.484 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.484 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.484 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.484 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.484 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:42.484 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:42.742 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:42.742 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.742 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:42.742 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:42.742 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:42.742 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.742 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.742 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.742 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.742 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.742 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.742 14:56:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.743 14:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.311 00:15:43.311 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.311 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.311 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.311 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.311 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.311 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.311 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.311 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.311 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.311 { 00:15:43.311 "cntlid": 33, 00:15:43.311 "qid": 0, 00:15:43.311 "state": "enabled", 00:15:43.311 "thread": "nvmf_tgt_poll_group_000", 00:15:43.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:43.311 "listen_address": { 00:15:43.311 "trtype": "TCP", 00:15:43.311 "adrfam": "IPv4", 00:15:43.311 "traddr": "10.0.0.2", 00:15:43.311 "trsvcid": "4420" 00:15:43.311 }, 00:15:43.311 "peer_address": { 00:15:43.311 "trtype": "TCP", 00:15:43.311 "adrfam": "IPv4", 00:15:43.311 "traddr": "10.0.0.1", 00:15:43.311 "trsvcid": "54534" 00:15:43.311 }, 00:15:43.311 "auth": { 00:15:43.311 "state": "completed", 00:15:43.311 "digest": "sha256", 00:15:43.311 "dhgroup": "ffdhe6144" 00:15:43.311 } 00:15:43.311 } 00:15:43.311 ]' 00:15:43.311 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.311 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:43.311 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.570 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:43.570 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.570 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.570 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.570 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.829 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:15:43.829 14:56:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:15:44.397 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.397 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:44.397 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.397 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.397 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.397 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.397 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:44.397 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:44.397 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:44.397 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.397 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:44.398 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:44.398 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:44.398 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.398 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.398 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.398 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.398 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.398 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.398 14:56:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.398 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.966 00:15:44.966 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.966 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.966 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.966 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.966 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.966 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.966 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.966 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.966 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.966 { 00:15:44.966 "cntlid": 35, 00:15:44.966 "qid": 0, 00:15:44.966 "state": "enabled", 00:15:44.966 "thread": "nvmf_tgt_poll_group_000", 00:15:44.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:44.966 "listen_address": { 00:15:44.966 "trtype": "TCP", 00:15:44.966 "adrfam": "IPv4", 00:15:44.966 "traddr": "10.0.0.2", 00:15:44.966 "trsvcid": "4420" 00:15:44.966 }, 00:15:44.966 "peer_address": { 00:15:44.966 "trtype": "TCP", 00:15:44.966 "adrfam": "IPv4", 00:15:44.966 "traddr": "10.0.0.1", 00:15:44.966 "trsvcid": "54560" 00:15:44.966 }, 00:15:44.966 "auth": { 00:15:44.966 "state": "completed", 00:15:44.966 "digest": "sha256", 00:15:44.966 "dhgroup": "ffdhe6144" 00:15:44.966 } 00:15:44.966 } 00:15:44.966 ]' 00:15:44.966 14:56:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.225 14:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:45.225 14:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.225 14:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:45.225 14:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.225 14:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.226 14:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.226 14:56:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.485 14:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:15:45.485 14:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:15:46.053 14:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.053 14:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.053 14:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.053 14:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.053 14:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.053 14:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.053 14:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:46.053 14:56:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:46.313 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:46.313 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.313 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:46.313 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:46.313 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:46.313 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.313 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.313 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.313 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.313 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.313 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.313 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.313 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.573 00:15:46.573 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.573 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.573 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.832 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.832 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.832 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.832 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.832 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.832 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.832 { 00:15:46.832 "cntlid": 37, 00:15:46.832 "qid": 0, 00:15:46.832 "state": "enabled", 00:15:46.832 "thread": "nvmf_tgt_poll_group_000", 00:15:46.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:46.833 "listen_address": { 00:15:46.833 "trtype": "TCP", 00:15:46.833 "adrfam": "IPv4", 00:15:46.833 "traddr": "10.0.0.2", 00:15:46.833 "trsvcid": "4420" 00:15:46.833 }, 00:15:46.833 "peer_address": { 00:15:46.833 "trtype": "TCP", 00:15:46.833 "adrfam": "IPv4", 00:15:46.833 "traddr": "10.0.0.1", 00:15:46.833 "trsvcid": "39958" 00:15:46.833 }, 00:15:46.833 "auth": { 00:15:46.833 "state": "completed", 00:15:46.833 "digest": "sha256", 00:15:46.833 "dhgroup": "ffdhe6144" 00:15:46.833 } 00:15:46.833 } 00:15:46.833 ]' 00:15:46.833 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.833 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.833 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.833 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:46.833 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.833 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.833 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.833 14:56:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.092 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:15:47.092 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:15:47.659 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.659 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.659 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:47.659 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.659 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.659 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.659 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.659 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:47.660 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:47.918 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:47.918 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.918 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:47.918 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:47.918 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:47.918 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.918 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:47.918 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:47.918 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.918 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.918 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:47.918 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:47.918 14:56:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.177 00:15:48.177 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.177 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.177 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.436 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.436 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.437 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.437 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.437 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.437 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.437 { 00:15:48.437 "cntlid": 39, 00:15:48.437 "qid": 0, 00:15:48.437 "state": "enabled", 00:15:48.437 "thread": "nvmf_tgt_poll_group_000", 00:15:48.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:48.437 "listen_address": { 00:15:48.437 "trtype": "TCP", 00:15:48.437 "adrfam": "IPv4", 00:15:48.437 "traddr": "10.0.0.2", 00:15:48.437 "trsvcid": "4420" 00:15:48.437 }, 00:15:48.437 "peer_address": { 00:15:48.437 "trtype": "TCP", 00:15:48.437 "adrfam": "IPv4", 00:15:48.437 "traddr": "10.0.0.1", 00:15:48.437 "trsvcid": "39988" 00:15:48.437 }, 00:15:48.437 "auth": { 00:15:48.437 "state": "completed", 00:15:48.437 "digest": "sha256", 00:15:48.437 "dhgroup": "ffdhe6144" 00:15:48.437 } 00:15:48.437 } 00:15:48.437 ]' 00:15:48.437 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.437 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:48.437 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.437 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:48.437 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:15:48.696 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.696 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.696 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.696 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:15:48.696 14:56:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:15:49.264 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.264 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.264 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.264 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.264 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.264 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:49.264 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.264 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:49.264 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:49.523 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:49.523 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.523 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:49.523 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:49.523 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:49.523 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.523 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:49.523 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.523 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.523 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.523 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.523 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:49.523 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.091 00:15:50.091 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.091 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.091 14:56:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.350 14:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.350 14:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.350 14:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.350 14:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.350 14:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.350 14:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.350 { 00:15:50.350 "cntlid": 41, 00:15:50.350 "qid": 0, 00:15:50.350 "state": "enabled", 00:15:50.350 "thread": "nvmf_tgt_poll_group_000", 00:15:50.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:50.350 "listen_address": { 00:15:50.350 "trtype": "TCP", 00:15:50.350 "adrfam": "IPv4", 00:15:50.350 "traddr": "10.0.0.2", 00:15:50.350 "trsvcid": "4420" 00:15:50.350 }, 00:15:50.350 "peer_address": { 00:15:50.350 "trtype": "TCP", 00:15:50.350 "adrfam": "IPv4", 00:15:50.350 "traddr": "10.0.0.1", 00:15:50.350 "trsvcid": "40018" 00:15:50.350 }, 00:15:50.350 "auth": { 00:15:50.350 "state": "completed", 00:15:50.350 "digest": "sha256", 00:15:50.350 "dhgroup": "ffdhe8192" 00:15:50.351 } 00:15:50.351 } 00:15:50.351 ]' 00:15:50.351 14:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.351 14:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:50.351 14:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.351 
14:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:50.351 14:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.351 14:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.351 14:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.351 14:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.608 14:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:15:50.608 14:56:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:15:51.174 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.174 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:51.174 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.174 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.174 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.174 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.175 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:51.175 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:51.433 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:51.433 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.433 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:51.433 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:51.433 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:51.433 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.433 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.433 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.433 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.433 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.433 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.433 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.433 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.001 00:15:52.001 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.001 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.001 14:56:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.001 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.001 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.001 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.001 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.001 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.001 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.001 { 00:15:52.001 "cntlid": 43, 00:15:52.001 "qid": 0, 00:15:52.001 "state": "enabled", 00:15:52.001 "thread": "nvmf_tgt_poll_group_000", 00:15:52.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:52.001 "listen_address": { 00:15:52.001 "trtype": "TCP", 00:15:52.001 "adrfam": "IPv4", 00:15:52.001 "traddr": "10.0.0.2", 00:15:52.001 "trsvcid": "4420" 00:15:52.001 }, 00:15:52.001 "peer_address": { 00:15:52.001 "trtype": "TCP", 00:15:52.001 "adrfam": "IPv4", 00:15:52.001 "traddr": "10.0.0.1", 00:15:52.001 "trsvcid": "40044" 00:15:52.001 }, 00:15:52.001 "auth": { 00:15:52.001 "state": "completed", 00:15:52.001 "digest": "sha256", 00:15:52.001 "dhgroup": "ffdhe8192" 00:15:52.001 } 00:15:52.001 } 00:15:52.001 ]' 00:15:52.001 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.264 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.264 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.264 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:52.264 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.264 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.264 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.264 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.523 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:15:52.523 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:15:53.091 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.091 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.091 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.091 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.091 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.091 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.091 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.091 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:53.091 14:56:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:53.350 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:53.350 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.350 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:53.350 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:15:53.350 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:53.350 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.350 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.350 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.350 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.350 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.350 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.350 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.350 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.609 00:15:53.868 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.868 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.868 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.868 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.868 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.868 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.868 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.868 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.868 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.868 { 00:15:53.868 "cntlid": 45, 00:15:53.868 "qid": 0, 00:15:53.868 "state": "enabled", 00:15:53.868 "thread": "nvmf_tgt_poll_group_000", 00:15:53.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:53.868 "listen_address": { 00:15:53.868 "trtype": "TCP", 00:15:53.868 "adrfam": "IPv4", 00:15:53.868 "traddr": "10.0.0.2", 00:15:53.868 "trsvcid": "4420" 00:15:53.868 }, 00:15:53.868 "peer_address": { 00:15:53.868 "trtype": "TCP", 00:15:53.868 "adrfam": "IPv4", 00:15:53.868 "traddr": "10.0.0.1", 00:15:53.868 "trsvcid": "40068" 00:15:53.868 }, 00:15:53.868 "auth": { 00:15:53.868 "state": 
"completed", 00:15:53.868 "digest": "sha256", 00:15:53.868 "dhgroup": "ffdhe8192" 00:15:53.868 } 00:15:53.868 } 00:15:53.868 ]' 00:15:53.868 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.868 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:53.868 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.126 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:54.126 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.126 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.126 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.126 14:56:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.385 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:15:54.385 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:15:54.953 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.953 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:54.953 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.953 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.953 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.954 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.954 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:54.954 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:54.954 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:54.954 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:15:54.954 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:54.954 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:54.954 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:54.954 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.954 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:54.954 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.954 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.954 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.954 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:54.954 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:54.954 14:56:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.521 00:15:55.521 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.521 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.521 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.780 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.780 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.780 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.780 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.780 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.780 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.780 { 00:15:55.780 "cntlid": 47, 00:15:55.780 "qid": 0, 00:15:55.780 "state": "enabled", 00:15:55.780 "thread": "nvmf_tgt_poll_group_000", 00:15:55.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:55.780 "listen_address": { 00:15:55.780 "trtype": "TCP", 00:15:55.780 "adrfam": "IPv4", 00:15:55.780 "traddr": "10.0.0.2", 00:15:55.780 "trsvcid": "4420" 00:15:55.780 }, 00:15:55.780 "peer_address": { 00:15:55.780 "trtype": "TCP", 00:15:55.780 "adrfam": "IPv4", 00:15:55.780 "traddr": 
"10.0.0.1", 00:15:55.780 "trsvcid": "40078" 00:15:55.780 }, 00:15:55.780 "auth": { 00:15:55.780 "state": "completed", 00:15:55.780 "digest": "sha256", 00:15:55.780 "dhgroup": "ffdhe8192" 00:15:55.780 } 00:15:55.780 } 00:15:55.780 ]' 00:15:55.780 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.780 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.780 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.780 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:55.780 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.780 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.780 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.780 14:56:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.039 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:15:56.039 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:15:56.607 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.607 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.607 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.607 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.607 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.607 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:56.607 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.607 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.607 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:56.607 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:56.866 14:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:56.866 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.866 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.866 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:56.866 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:56.866 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.866 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.866 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.866 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.866 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.866 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.866 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.866 14:56:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.124 00:15:57.124 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.124 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.124 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.382 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.382 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.382 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.382 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.382 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.382 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.382 { 00:15:57.382 "cntlid": 49, 00:15:57.382 "qid": 0, 00:15:57.382 "state": "enabled", 00:15:57.382 "thread": "nvmf_tgt_poll_group_000", 00:15:57.382 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:57.382 "listen_address": { 00:15:57.382 "trtype": "TCP", 00:15:57.382 "adrfam": "IPv4", 00:15:57.382 "traddr": "10.0.0.2", 00:15:57.382 "trsvcid": "4420" 00:15:57.382 }, 00:15:57.382 "peer_address": { 00:15:57.382 "trtype": "TCP", 00:15:57.382 "adrfam": "IPv4", 00:15:57.382 "traddr": "10.0.0.1", 00:15:57.382 "trsvcid": "37714" 00:15:57.382 }, 00:15:57.382 "auth": { 00:15:57.382 "state": "completed", 00:15:57.382 "digest": "sha384", 00:15:57.382 "dhgroup": "null" 00:15:57.382 } 00:15:57.382 } 00:15:57.382 ]' 00:15:57.382 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.382 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.382 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.382 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:57.382 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.641 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.641 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.641 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.641 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:15:57.641 14:56:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:15:58.229 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.229 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:58.229 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.229 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.229 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.229 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.229 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups null 00:15:58.229 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:58.523 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:58.523 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.523 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.523 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:58.523 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:58.523 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.523 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.523 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.523 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.523 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.523 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.523 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.523 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.782 00:15:58.782 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.782 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.782 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.040 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.040 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.040 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.040 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.040 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.040 
14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:59.040 { 00:15:59.040 "cntlid": 51, 00:15:59.040 "qid": 0, 00:15:59.040 "state": "enabled", 00:15:59.041 "thread": "nvmf_tgt_poll_group_000", 00:15:59.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:59.041 "listen_address": { 00:15:59.041 "trtype": "TCP", 00:15:59.041 "adrfam": "IPv4", 00:15:59.041 "traddr": "10.0.0.2", 00:15:59.041 "trsvcid": "4420" 00:15:59.041 }, 00:15:59.041 "peer_address": { 00:15:59.041 "trtype": "TCP", 00:15:59.041 "adrfam": "IPv4", 00:15:59.041 "traddr": "10.0.0.1", 00:15:59.041 "trsvcid": "37736" 00:15:59.041 }, 00:15:59.041 "auth": { 00:15:59.041 "state": "completed", 00:15:59.041 "digest": "sha384", 00:15:59.041 "dhgroup": "null" 00:15:59.041 } 00:15:59.041 } 00:15:59.041 ]' 00:15:59.041 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.041 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.041 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.041 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:59.041 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.041 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.041 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.041 14:56:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.299 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:15:59.299 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:15:59.867 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.867 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.867 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.867 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.867 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.867 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:15:59.867 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:59.867 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:00.125 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:00.125 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:00.125 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.125 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:00.125 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:00.125 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.125 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.125 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.125 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.125 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.125 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.125 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.125 14:56:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.384 00:16:00.384 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.384 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.384 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.642 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.642 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.642 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.642 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:00.642 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.642 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.642 { 00:16:00.642 "cntlid": 53, 00:16:00.642 "qid": 0, 00:16:00.642 "state": "enabled", 00:16:00.642 "thread": "nvmf_tgt_poll_group_000", 00:16:00.642 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:00.642 "listen_address": { 00:16:00.642 "trtype": "TCP", 00:16:00.642 "adrfam": "IPv4", 00:16:00.642 "traddr": "10.0.0.2", 00:16:00.642 "trsvcid": "4420" 00:16:00.642 }, 00:16:00.642 "peer_address": { 00:16:00.642 "trtype": "TCP", 00:16:00.642 "adrfam": "IPv4", 00:16:00.642 "traddr": "10.0.0.1", 00:16:00.642 "trsvcid": "37752" 00:16:00.642 }, 00:16:00.642 "auth": { 00:16:00.642 "state": "completed", 00:16:00.642 "digest": "sha384", 00:16:00.642 "dhgroup": "null" 00:16:00.642 } 00:16:00.642 } 00:16:00.642 ]' 00:16:00.642 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.642 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.642 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.642 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:00.642 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.643 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.643 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.643 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.901 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:00.901 14:56:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:01.467 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.467 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.467 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.467 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.467 14:56:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.467 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.467 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:01.467 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:01.726 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:01.726 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.726 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.726 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:01.726 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:01.726 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.726 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:01.726 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.726 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.726 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.726 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:01.726 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.726 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.726 00:16:01.984 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.984 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.984 14:56:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.984 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.984 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.984 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:16:01.984 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.984 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.984 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.984 { 00:16:01.984 "cntlid": 55, 00:16:01.984 "qid": 0, 00:16:01.984 "state": "enabled", 00:16:01.984 "thread": "nvmf_tgt_poll_group_000", 00:16:01.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:01.984 "listen_address": { 00:16:01.984 "trtype": "TCP", 00:16:01.984 "adrfam": "IPv4", 00:16:01.984 "traddr": "10.0.0.2", 00:16:01.984 "trsvcid": "4420" 00:16:01.984 }, 00:16:01.984 "peer_address": { 00:16:01.984 "trtype": "TCP", 00:16:01.984 "adrfam": "IPv4", 00:16:01.984 "traddr": "10.0.0.1", 00:16:01.984 "trsvcid": "37790" 00:16:01.984 }, 00:16:01.984 "auth": { 00:16:01.984 "state": "completed", 00:16:01.984 "digest": "sha384", 00:16:01.984 "dhgroup": "null" 00:16:01.984 } 00:16:01.984 } 00:16:01.984 ]' 00:16:01.984 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.244 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.244 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.244 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:02.244 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.244 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.244 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.244 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.504 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:02.504 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:03.071 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.071 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:03.071 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.071 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.071 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.071 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.071 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.071 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:03.071 14:56:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:03.071 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:03.071 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.071 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.071 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:03.071 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:03.071 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.071 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.071 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.071 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.071 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.071 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.071 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.071 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.330 00:16:03.330 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.330 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.330 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.589 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.589 14:56:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.589 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.589 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.589 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.589 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.589 { 00:16:03.589 "cntlid": 57, 00:16:03.589 "qid": 0, 00:16:03.589 "state": "enabled", 00:16:03.589 "thread": "nvmf_tgt_poll_group_000", 00:16:03.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:03.589 "listen_address": { 00:16:03.589 "trtype": "TCP", 00:16:03.589 "adrfam": "IPv4", 00:16:03.589 "traddr": "10.0.0.2", 00:16:03.589 "trsvcid": "4420" 00:16:03.589 }, 00:16:03.589 "peer_address": { 00:16:03.589 "trtype": "TCP", 00:16:03.589 "adrfam": "IPv4", 00:16:03.589 "traddr": "10.0.0.1", 00:16:03.589 "trsvcid": "37822" 00:16:03.589 }, 00:16:03.589 "auth": { 00:16:03.589 "state": "completed", 00:16:03.589 "digest": "sha384", 00:16:03.589 "dhgroup": "ffdhe2048" 00:16:03.589 } 00:16:03.589 } 00:16:03.589 ]' 00:16:03.589 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.589 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.589 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.847 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:03.847 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.847 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.847 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.847 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.106 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:04.106 14:56:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:04.672 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.672 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.672 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.672 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.672 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.672 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.672 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:04.672 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:04.672 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:04.672 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.672 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:04.672 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:04.672 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:04.672 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.672 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.673 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.673 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.931 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.931 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.931 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.931 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.931 00:16:05.190 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.190 14:56:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.190 14:56:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.190 14:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.190 14:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.190 14:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.190 14:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.190 14:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.190 14:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.190 { 00:16:05.190 "cntlid": 59, 00:16:05.190 "qid": 0, 00:16:05.190 "state": "enabled", 00:16:05.190 "thread": "nvmf_tgt_poll_group_000", 00:16:05.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:05.190 "listen_address": { 00:16:05.190 "trtype": "TCP", 00:16:05.190 "adrfam": "IPv4", 00:16:05.190 "traddr": "10.0.0.2", 00:16:05.190 "trsvcid": "4420" 00:16:05.190 }, 00:16:05.190 "peer_address": { 00:16:05.190 "trtype": "TCP", 00:16:05.190 "adrfam": "IPv4", 00:16:05.190 "traddr": "10.0.0.1", 00:16:05.190 "trsvcid": "37834" 00:16:05.190 }, 00:16:05.190 "auth": { 00:16:05.190 "state": "completed", 00:16:05.190 "digest": "sha384", 00:16:05.190 "dhgroup": "ffdhe2048" 00:16:05.190 } 00:16:05.190 } 00:16:05.190 ]' 00:16:05.190 14:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.448 14:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:05.448 14:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.448 14:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:05.448 14:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.448 14:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.448 14:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.448 14:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.707 14:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:05.707 14:56:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:06.273 14:56:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.273 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.273 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.273 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.273 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.273 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.273 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:06.273 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:06.532 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:06.532 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.532 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:06.532 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:06.532 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:06.532 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.532 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.532 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.532 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.532 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.532 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.532 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.532 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.790 00:16:06.790 14:56:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.790 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.790 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.790 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.790 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.790 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.790 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.790 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.790 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.790 { 00:16:06.790 "cntlid": 61, 00:16:06.790 "qid": 0, 00:16:06.790 "state": "enabled", 00:16:06.790 "thread": "nvmf_tgt_poll_group_000", 00:16:06.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:06.790 "listen_address": { 00:16:06.790 "trtype": "TCP", 00:16:06.790 "adrfam": "IPv4", 00:16:06.790 "traddr": "10.0.0.2", 00:16:06.790 "trsvcid": "4420" 00:16:06.790 }, 00:16:06.790 "peer_address": { 00:16:06.790 "trtype": "TCP", 00:16:06.790 "adrfam": "IPv4", 00:16:06.790 "traddr": "10.0.0.1", 00:16:06.790 "trsvcid": "41242" 00:16:06.790 }, 00:16:06.790 "auth": { 00:16:06.790 "state": "completed", 00:16:06.790 "digest": "sha384", 00:16:06.790 "dhgroup": "ffdhe2048" 00:16:06.790 } 00:16:06.790 } 00:16:06.790 ]' 00:16:07.049 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.049 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.049 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.049 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:07.049 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.049 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.049 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.049 14:56:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.307 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:07.307 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 
0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:07.873 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.873 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:07.873 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.873 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.873 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.873 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.873 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:07.873 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:08.131 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:08.131 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:08.131 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:08.131 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:08.131 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:08.131 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.131 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:08.131 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.131 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.131 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.131 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:08.131 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.131 14:57:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.390 00:16:08.390 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.390 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.390 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.648 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.648 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.648 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.648 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.648 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.648 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.648 { 00:16:08.648 "cntlid": 63, 00:16:08.648 "qid": 0, 00:16:08.648 "state": "enabled", 00:16:08.648 "thread": "nvmf_tgt_poll_group_000", 00:16:08.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:08.648 "listen_address": { 00:16:08.648 "trtype": "TCP", 00:16:08.648 "adrfam": "IPv4", 00:16:08.648 "traddr": "10.0.0.2", 00:16:08.648 "trsvcid": "4420" 00:16:08.648 }, 00:16:08.648 "peer_address": { 00:16:08.648 "trtype": "TCP", 00:16:08.648 "adrfam": "IPv4", 00:16:08.648 "traddr": "10.0.0.1", 00:16:08.648 "trsvcid": "41266" 00:16:08.648 }, 00:16:08.648 "auth": { 00:16:08.648 "state": "completed", 00:16:08.648 "digest": "sha384", 00:16:08.648 "dhgroup": "ffdhe2048" 00:16:08.648 } 00:16:08.648 } 00:16:08.648 ]' 00:16:08.648 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.648 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.648 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.648 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:08.648 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.648 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.648 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.648 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.906 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:08.906 14:57:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:09.473 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.473 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.473 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.473 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.473 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.473 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.473 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.473 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:09.473 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:09.731 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:09.731 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.731 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:09.731 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:09.731 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:09.731 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.731 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.731 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.731 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.731 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.731 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.731 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.731 14:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.989 00:16:09.989 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.989 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.989 14:57:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.247 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.247 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.247 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.247 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.247 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.247 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.247 { 00:16:10.247 "cntlid": 65, 00:16:10.247 "qid": 0, 00:16:10.247 "state": "enabled", 00:16:10.247 "thread": "nvmf_tgt_poll_group_000", 00:16:10.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:10.247 "listen_address": { 00:16:10.247 "trtype": "TCP", 00:16:10.247 "adrfam": "IPv4", 00:16:10.247 "traddr": "10.0.0.2", 00:16:10.247 "trsvcid": "4420" 00:16:10.247 }, 00:16:10.247 "peer_address": { 00:16:10.247 "trtype": "TCP", 00:16:10.247 "adrfam": "IPv4", 00:16:10.247 "traddr": "10.0.0.1", 00:16:10.247 "trsvcid": "41298" 00:16:10.247 }, 00:16:10.247 "auth": { 00:16:10.247 "state": "completed", 00:16:10.247 "digest": "sha384", 00:16:10.247 "dhgroup": "ffdhe3072" 00:16:10.247 } 00:16:10.247 } 00:16:10.247 ]' 00:16:10.247 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.247 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:10.247 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.247 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:10.247 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.247 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.247 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.247 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.505 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:10.505 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:11.071 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.071 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:11.071 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.071 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.071 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.071 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.071 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:11.071 14:57:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:11.329 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:11.329 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.329 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:11.329 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:11.329 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:11.329 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.329 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.329 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.329 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.329 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.329 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.329 14:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.329 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.587 00:16:11.587 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.587 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.587 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.846 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.846 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.846 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.846 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.847 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.847 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.847 { 00:16:11.847 "cntlid": 67, 00:16:11.847 "qid": 0, 00:16:11.847 "state": "enabled", 00:16:11.847 "thread": "nvmf_tgt_poll_group_000", 00:16:11.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:11.847 "listen_address": { 00:16:11.847 "trtype": "TCP", 00:16:11.847 "adrfam": "IPv4", 00:16:11.847 "traddr": "10.0.0.2", 00:16:11.847 "trsvcid": "4420" 00:16:11.847 }, 00:16:11.847 "peer_address": { 00:16:11.847 "trtype": "TCP", 00:16:11.847 "adrfam": "IPv4", 00:16:11.847 "traddr": "10.0.0.1", 00:16:11.847 "trsvcid": "41330" 00:16:11.847 }, 00:16:11.847 "auth": { 00:16:11.847 "state": "completed", 00:16:11.847 "digest": "sha384", 00:16:11.847 "dhgroup": "ffdhe3072" 00:16:11.847 } 00:16:11.847 } 00:16:11.847 ]' 00:16:11.847 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.847 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:11.847 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.847 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:11.847 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.847 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.847 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.847 14:57:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.105 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:12.105 14:57:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:12.671 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.671 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.671 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.671 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.671 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.671 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.671 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:12.671 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:12.929 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:12.929 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.929 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:12.929 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:12.929 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:12.929 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.929 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.929 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.929 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.929 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.929 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.929 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.929 14:57:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.187 00:16:13.187 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.187 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.187 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.445 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.445 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.445 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.445 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.445 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.445 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.445 { 00:16:13.445 "cntlid": 69, 00:16:13.445 "qid": 0, 00:16:13.445 "state": "enabled", 00:16:13.445 "thread": "nvmf_tgt_poll_group_000", 00:16:13.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:13.445 "listen_address": { 00:16:13.445 "trtype": "TCP", 00:16:13.445 "adrfam": "IPv4", 00:16:13.445 "traddr": "10.0.0.2", 00:16:13.445 "trsvcid": "4420" 00:16:13.445 }, 00:16:13.445 "peer_address": { 00:16:13.445 "trtype": "TCP", 00:16:13.445 "adrfam": "IPv4", 00:16:13.445 "traddr": "10.0.0.1", 00:16:13.445 "trsvcid": "41352" 00:16:13.445 }, 00:16:13.445 "auth": { 00:16:13.445 "state": "completed", 00:16:13.445 "digest": "sha384", 00:16:13.445 "dhgroup": "ffdhe3072" 00:16:13.445 } 00:16:13.445 } 00:16:13.445 ]' 00:16:13.446 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.446 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:13.446 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.446 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:13.446 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.446 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.446 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.446 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.704 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:13.704 14:57:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:14.270 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.270 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:14.270 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.270 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.270 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.270 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.270 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:14.270 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:14.528 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:14.528 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.528 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:14.528 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:14.528 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:14.528 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.528 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:14.528 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:14.528 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.528 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.528 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:14.528 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.528 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.786 00:16:14.786 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.786 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.786 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.044 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.044 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.044 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.044 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.044 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.044 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.044 { 00:16:15.044 "cntlid": 71, 00:16:15.044 "qid": 0, 00:16:15.044 "state": "enabled", 00:16:15.044 "thread": "nvmf_tgt_poll_group_000", 00:16:15.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:15.044 "listen_address": { 00:16:15.044 "trtype": "TCP", 00:16:15.045 "adrfam": "IPv4", 00:16:15.045 "traddr": "10.0.0.2", 00:16:15.045 "trsvcid": "4420" 00:16:15.045 }, 00:16:15.045 "peer_address": { 00:16:15.045 "trtype": "TCP", 00:16:15.045 "adrfam": "IPv4", 00:16:15.045 "traddr": "10.0.0.1", 00:16:15.045 "trsvcid": "41384" 00:16:15.045 }, 00:16:15.045 "auth": { 00:16:15.045 "state": "completed", 00:16:15.045 "digest": "sha384", 00:16:15.045 "dhgroup": "ffdhe3072" 00:16:15.045 } 00:16:15.045 } 00:16:15.045 ]' 00:16:15.045 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.045 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:15.045 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.045 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:15.045 14:57:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:16:15.045 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.045 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.045 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.303 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:15.303 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:15.869 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.869 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:15.869 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.869 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.869 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.869 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.869 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.869 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:15.869 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:16.128 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:16.128 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.128 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:16.128 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:16.128 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:16.128 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.128 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:16.128 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.128 14:57:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.128 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.128 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.128 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.128 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.386 00:16:16.386 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.386 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.386 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.644 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.644 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.644 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.644 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.644 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.644 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.644 { 00:16:16.644 "cntlid": 73, 00:16:16.644 "qid": 0, 00:16:16.644 "state": "enabled", 00:16:16.644 "thread": "nvmf_tgt_poll_group_000", 00:16:16.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:16.644 "listen_address": { 00:16:16.644 "trtype": "TCP", 00:16:16.644 "adrfam": "IPv4", 00:16:16.644 "traddr": "10.0.0.2", 00:16:16.644 "trsvcid": "4420" 00:16:16.644 }, 00:16:16.644 "peer_address": { 00:16:16.644 "trtype": "TCP", 00:16:16.644 "adrfam": "IPv4", 00:16:16.644 "traddr": "10.0.0.1", 00:16:16.644 "trsvcid": "37166" 00:16:16.644 }, 00:16:16.644 "auth": { 00:16:16.644 "state": "completed", 00:16:16.644 "digest": "sha384", 00:16:16.644 "dhgroup": "ffdhe4096" 00:16:16.644 } 00:16:16.644 } 00:16:16.644 ]' 00:16:16.644 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.644 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.644 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.645 
14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:16.645 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.645 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.645 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.645 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.903 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:16.903 14:57:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.593 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.900 00:16:17.900 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.900 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.900 14:57:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.171 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.171 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.171 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.171 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.171 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.171 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.171 { 00:16:18.171 "cntlid": 75, 00:16:18.171 "qid": 0, 00:16:18.171 "state": "enabled", 00:16:18.171 "thread": "nvmf_tgt_poll_group_000", 00:16:18.171 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:18.171 "listen_address": { 00:16:18.171 "trtype": "TCP", 00:16:18.171 "adrfam": "IPv4", 00:16:18.171 "traddr": "10.0.0.2", 00:16:18.171 "trsvcid": "4420" 00:16:18.171 }, 00:16:18.171 "peer_address": { 00:16:18.171 "trtype": "TCP", 00:16:18.171 "adrfam": "IPv4", 00:16:18.171 "traddr": "10.0.0.1", 00:16:18.171 "trsvcid": "37180" 00:16:18.171 }, 00:16:18.171 "auth": { 00:16:18.171 "state": "completed", 00:16:18.171 "digest": "sha384", 00:16:18.171 "dhgroup": "ffdhe4096" 00:16:18.171 } 00:16:18.171 } 00:16:18.171 ]' 00:16:18.171 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.171 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:18.171 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.171 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:18.171 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.171 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.171 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.171 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.428 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:18.429 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:18.994 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.994 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.994 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.994 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.994 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.994 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.994 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:18.994 14:57:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:19.251 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:19.251 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.251 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:19.251 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe4096 00:16:19.251 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:19.251 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.251 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.251 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.251 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.251 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.251 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.251 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.251 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.509 00:16:19.509 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.509 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.509 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.767 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.767 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.767 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.767 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.767 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.767 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.767 { 00:16:19.767 "cntlid": 77, 00:16:19.767 "qid": 0, 00:16:19.767 "state": "enabled", 00:16:19.767 "thread": "nvmf_tgt_poll_group_000", 00:16:19.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:19.767 "listen_address": { 00:16:19.767 "trtype": "TCP", 00:16:19.767 "adrfam": "IPv4", 00:16:19.767 "traddr": "10.0.0.2", 00:16:19.767 "trsvcid": "4420" 00:16:19.767 }, 00:16:19.767 "peer_address": { 00:16:19.767 "trtype": "TCP", 00:16:19.767 "adrfam": "IPv4", 00:16:19.767 "traddr": "10.0.0.1", 00:16:19.767 "trsvcid": "37196" 00:16:19.767 }, 00:16:19.767 "auth": { 00:16:19.767 "state": 
"completed", 00:16:19.767 "digest": "sha384", 00:16:19.767 "dhgroup": "ffdhe4096" 00:16:19.767 } 00:16:19.767 } 00:16:19.767 ]' 00:16:19.767 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.767 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.767 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.767 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:19.767 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.026 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.026 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.026 14:57:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.026 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:20.026 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:20.592 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.592 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.592 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.592 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.592 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.592 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.592 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:20.592 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:20.850 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:20.850 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:20.850 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:20.850 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:20.850 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:20.850 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.850 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:20.850 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.850 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.850 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.850 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:20.850 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.850 14:57:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.109 00:16:21.109 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.109 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.109 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.369 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.369 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.369 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.369 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.369 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.369 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.369 { 00:16:21.369 "cntlid": 79, 00:16:21.369 "qid": 0, 00:16:21.369 "state": "enabled", 00:16:21.369 "thread": "nvmf_tgt_poll_group_000", 00:16:21.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:21.369 "listen_address": { 00:16:21.369 "trtype": "TCP", 00:16:21.369 "adrfam": "IPv4", 00:16:21.369 "traddr": "10.0.0.2", 00:16:21.369 "trsvcid": "4420" 00:16:21.369 }, 00:16:21.369 "peer_address": { 00:16:21.369 "trtype": "TCP", 00:16:21.369 "adrfam": "IPv4", 00:16:21.369 "traddr": 
"10.0.0.1", 00:16:21.369 "trsvcid": "37224" 00:16:21.369 }, 00:16:21.369 "auth": { 00:16:21.369 "state": "completed", 00:16:21.369 "digest": "sha384", 00:16:21.369 "dhgroup": "ffdhe4096" 00:16:21.369 } 00:16:21.369 } 00:16:21.369 ]' 00:16:21.369 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.369 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:21.369 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.627 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:21.627 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.627 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.627 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.627 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.885 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:21.886 14:57:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:22.451 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.451 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.451 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.451 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.451 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.451 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.452 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.452 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:22.452 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:22.452 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:22.452 14:57:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.452 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:22.452 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:22.452 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:22.452 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.452 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.452 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.452 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.452 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.452 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.452 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.452 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.018 00:16:23.018 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.018 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.018 14:57:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.018 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.018 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.018 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.018 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.276 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.276 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.276 { 00:16:23.276 "cntlid": 81, 00:16:23.276 "qid": 0, 00:16:23.276 "state": "enabled", 00:16:23.276 "thread": "nvmf_tgt_poll_group_000", 00:16:23.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:23.276 "listen_address": { 00:16:23.276 "trtype": "TCP", 00:16:23.276 "adrfam": 
"IPv4", 00:16:23.276 "traddr": "10.0.0.2", 00:16:23.276 "trsvcid": "4420" 00:16:23.276 }, 00:16:23.276 "peer_address": { 00:16:23.276 "trtype": "TCP", 00:16:23.276 "adrfam": "IPv4", 00:16:23.276 "traddr": "10.0.0.1", 00:16:23.276 "trsvcid": "37252" 00:16:23.276 }, 00:16:23.276 "auth": { 00:16:23.276 "state": "completed", 00:16:23.276 "digest": "sha384", 00:16:23.276 "dhgroup": "ffdhe6144" 00:16:23.276 } 00:16:23.276 } 00:16:23.276 ]' 00:16:23.276 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.276 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.276 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.276 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:23.276 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.276 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.276 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.276 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.534 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:23.534 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:24.100 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.100 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.100 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.100 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.100 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.100 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.100 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:24.100 14:57:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:24.358 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:24.358 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.358 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:24.358 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:24.358 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:24.358 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.358 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.358 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.358 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.358 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.358 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.358 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.358 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.616 00:16:24.616 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.616 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.616 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.874 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.874 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.874 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.874 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.874 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.874 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.874 { 
00:16:24.874 "cntlid": 83, 00:16:24.874 "qid": 0, 00:16:24.874 "state": "enabled", 00:16:24.874 "thread": "nvmf_tgt_poll_group_000", 00:16:24.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:24.874 "listen_address": { 00:16:24.874 "trtype": "TCP", 00:16:24.874 "adrfam": "IPv4", 00:16:24.874 "traddr": "10.0.0.2", 00:16:24.874 "trsvcid": "4420" 00:16:24.874 }, 00:16:24.874 "peer_address": { 00:16:24.874 "trtype": "TCP", 00:16:24.874 "adrfam": "IPv4", 00:16:24.874 "traddr": "10.0.0.1", 00:16:24.874 "trsvcid": "37280" 00:16:24.874 }, 00:16:24.874 "auth": { 00:16:24.874 "state": "completed", 00:16:24.874 "digest": "sha384", 00:16:24.874 "dhgroup": "ffdhe6144" 00:16:24.874 } 00:16:24.874 } 00:16:24.874 ]' 00:16:24.874 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.874 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:24.874 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.875 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:24.875 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.875 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.875 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.875 14:57:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.133 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:25.133 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:25.700 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.700 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:25.700 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.700 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.700 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.700 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.700 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:25.700 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:25.959 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:25.959 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.959 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:25.959 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:25.959 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:25.959 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.959 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.959 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.959 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.959 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.959 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.959 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.959 14:57:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.218 00:16:26.218 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.218 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.218 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.477 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.477 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.477 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.477 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.477 14:57:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.477 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.477 { 00:16:26.477 "cntlid": 85, 00:16:26.477 "qid": 0, 00:16:26.477 "state": "enabled", 00:16:26.477 "thread": "nvmf_tgt_poll_group_000", 00:16:26.477 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:26.477 "listen_address": { 00:16:26.477 "trtype": "TCP", 00:16:26.477 "adrfam": "IPv4", 00:16:26.477 "traddr": "10.0.0.2", 00:16:26.477 "trsvcid": "4420" 00:16:26.477 }, 00:16:26.477 "peer_address": { 00:16:26.477 "trtype": "TCP", 00:16:26.477 "adrfam": "IPv4", 00:16:26.477 "traddr": "10.0.0.1", 00:16:26.477 "trsvcid": "59272" 00:16:26.477 }, 00:16:26.477 "auth": { 00:16:26.477 "state": "completed", 00:16:26.477 "digest": "sha384", 00:16:26.477 "dhgroup": "ffdhe6144" 00:16:26.477 } 00:16:26.477 } 00:16:26.477 ]' 00:16:26.477 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.736 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:26.736 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.736 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:26.736 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.736 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.736 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.736 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.995 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:26.995 14:57:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.563 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.131 00:16:28.131 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.131 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.131 14:57:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.131 14:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.131 14:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.131 14:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
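The trace above repeats the same authentication round once per key index for each digest/dhgroup pair. Condensed into a plain script, one round looks roughly like the sketch below. This is a minimal reconstruction from the commands visible in this run, not the test script itself: the host-side RPC socket, NQNs and key names are the ones printed above, while the target-side rpc.py invocation is an assumption since rpc_cmd does not print its socket in the trace.
#!/usr/bin/env bash
# Sketch of one DH-HMAC-CHAP round as exercised above (sha384 / ffdhe6144, key2 shown;
# the surrounding loops repeat this for every digest, dhgroup and key index).
HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock"
TGTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py"   # target socket not shown in the trace (assumed default)
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562"
SUBNQN="nqn.2024-03.io.spdk:cnode0"
# Restrict the host-side initiator to the digest/dhgroup combination under test.
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
# Allow the host on the subsystem with this key (plus a controller key when the key index has one).
$TGTRPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# Attach a controller through the host RPC server, then confirm the qpair finished authentication.
$HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
$HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name'                # expect nvme0
$TGTRPC nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'  # expect "completed"
# Tear down before the next combination; the full test also repeats the handshake with
# the in-kernel initiator (nvme connect / nvme disconnect) using the same DHHC-1 secrets.
$HOSTRPC bdev_nvme_detach_controller nvme0
$TGTRPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"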
00:16:28.131 14:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.391 14:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.391 14:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.391 { 00:16:28.391 "cntlid": 87, 00:16:28.391 "qid": 0, 00:16:28.391 "state": "enabled", 00:16:28.391 "thread": "nvmf_tgt_poll_group_000", 00:16:28.391 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:28.391 "listen_address": { 00:16:28.391 "trtype": "TCP", 00:16:28.391 "adrfam": "IPv4", 00:16:28.391 "traddr": "10.0.0.2", 00:16:28.391 "trsvcid": "4420" 00:16:28.391 }, 00:16:28.391 "peer_address": { 00:16:28.391 "trtype": "TCP", 00:16:28.391 "adrfam": "IPv4", 00:16:28.391 "traddr": "10.0.0.1", 00:16:28.391 "trsvcid": "59306" 00:16:28.391 }, 00:16:28.391 "auth": { 00:16:28.391 "state": "completed", 00:16:28.391 "digest": "sha384", 00:16:28.391 "dhgroup": "ffdhe6144" 00:16:28.391 } 00:16:28.391 } 00:16:28.391 ]' 00:16:28.391 14:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.391 14:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.391 14:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.391 14:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:28.391 14:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.391 14:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.391 14:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.391 14:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.650 14:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:28.650 14:57:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:29.217 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.217 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.217 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.217 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.217 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.217 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.217 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.217 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:29.217 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:29.476 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:29.476 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.476 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:29.476 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:29.476 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.476 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.476 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.476 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.476 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.476 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.476 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.476 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.476 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.735 00:16:29.994 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.994 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.994 14:57:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.994 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.994 14:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.994 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.994 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.994 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.994 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.994 { 00:16:29.994 "cntlid": 89, 00:16:29.994 "qid": 0, 00:16:29.994 "state": "enabled", 00:16:29.994 "thread": "nvmf_tgt_poll_group_000", 00:16:29.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:29.994 "listen_address": { 00:16:29.994 "trtype": "TCP", 00:16:29.994 "adrfam": "IPv4", 00:16:29.994 "traddr": "10.0.0.2", 00:16:29.994 "trsvcid": "4420" 00:16:29.994 }, 00:16:29.994 "peer_address": { 00:16:29.994 "trtype": "TCP", 00:16:29.994 "adrfam": "IPv4", 00:16:29.994 "traddr": "10.0.0.1", 00:16:29.994 "trsvcid": "59326" 00:16:29.994 }, 00:16:29.994 "auth": { 00:16:29.994 "state": "completed", 00:16:29.994 "digest": "sha384", 00:16:29.994 "dhgroup": "ffdhe8192" 00:16:29.994 } 00:16:29.994 } 00:16:29.994 ]' 00:16:29.994 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.253 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.253 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.253 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:30.253 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.253 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.253 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.253 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.512 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:30.512 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:31.080 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.080 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:31.080 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.080 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.080 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.080 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.080 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:31.080 14:57:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:31.339 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:31.339 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.339 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:31.339 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:31.339 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:31.339 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.339 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.339 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.339 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.339 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.339 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.339 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.339 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.907 00:16:31.907 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.907 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.907 14:57:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.907 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.907 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.907 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.907 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.907 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.907 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.907 { 00:16:31.907 "cntlid": 91, 00:16:31.907 "qid": 0, 00:16:31.907 "state": "enabled", 00:16:31.907 "thread": "nvmf_tgt_poll_group_000", 00:16:31.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:31.907 "listen_address": { 00:16:31.907 "trtype": "TCP", 00:16:31.907 "adrfam": "IPv4", 00:16:31.907 "traddr": "10.0.0.2", 00:16:31.907 "trsvcid": "4420" 00:16:31.907 }, 00:16:31.907 "peer_address": { 00:16:31.907 "trtype": "TCP", 00:16:31.907 "adrfam": "IPv4", 00:16:31.907 "traddr": "10.0.0.1", 00:16:31.907 "trsvcid": "59352" 00:16:31.907 }, 00:16:31.907 "auth": { 00:16:31.907 "state": "completed", 00:16:31.907 "digest": "sha384", 00:16:31.907 "dhgroup": "ffdhe8192" 00:16:31.907 } 00:16:31.907 } 00:16:31.907 ]' 00:16:31.907 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.907 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.907 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.166 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:32.166 14:57:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.166 14:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.166 14:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.166 14:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.424 14:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:32.424 14:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:32.991 14:57:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.991 14:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.991 14:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.991 14:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.991 14:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.991 14:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.991 14:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:32.991 14:57:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:32.991 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:32.991 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.991 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:32.991 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:32.991 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:32.991 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.991 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.991 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.991 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.991 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.991 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.991 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.991 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.558 00:16:33.558 14:57:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.558 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.558 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.817 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.817 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.817 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.817 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.817 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.817 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.817 { 00:16:33.817 "cntlid": 93, 00:16:33.817 "qid": 0, 00:16:33.817 "state": "enabled", 00:16:33.817 "thread": "nvmf_tgt_poll_group_000", 00:16:33.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:33.817 "listen_address": { 00:16:33.817 "trtype": "TCP", 00:16:33.817 "adrfam": "IPv4", 00:16:33.817 "traddr": "10.0.0.2", 00:16:33.817 "trsvcid": "4420" 00:16:33.817 }, 00:16:33.817 "peer_address": { 00:16:33.817 "trtype": "TCP", 00:16:33.817 "adrfam": "IPv4", 00:16:33.817 "traddr": "10.0.0.1", 00:16:33.817 "trsvcid": "59380" 00:16:33.817 }, 00:16:33.817 "auth": { 00:16:33.817 "state": "completed", 00:16:33.817 "digest": "sha384", 00:16:33.817 "dhgroup": "ffdhe8192" 00:16:33.817 } 00:16:33.817 } 00:16:33.817 ]' 00:16:33.817 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.817 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.817 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.817 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:33.817 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.817 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.817 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.817 14:57:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.076 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:34.076 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 
0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:34.644 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.644 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.644 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.644 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.644 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.644 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.644 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:34.644 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:34.903 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:34.903 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.903 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:34.903 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:34.903 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:34.903 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.903 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:34.903 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.903 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.903 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.903 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:34.903 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.904 14:57:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:35.472 00:16:35.472 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.472 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.472 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.731 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.731 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.731 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.731 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.731 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.731 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.731 { 00:16:35.731 "cntlid": 95, 00:16:35.731 "qid": 0, 00:16:35.731 "state": "enabled", 00:16:35.731 "thread": "nvmf_tgt_poll_group_000", 00:16:35.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:35.731 "listen_address": { 00:16:35.731 "trtype": "TCP", 00:16:35.731 "adrfam": "IPv4", 00:16:35.731 "traddr": "10.0.0.2", 00:16:35.731 "trsvcid": "4420" 00:16:35.731 }, 00:16:35.731 "peer_address": { 00:16:35.731 "trtype": "TCP", 00:16:35.731 "adrfam": "IPv4", 00:16:35.731 "traddr": "10.0.0.1", 00:16:35.731 "trsvcid": "59412" 00:16:35.731 }, 00:16:35.731 "auth": { 00:16:35.731 "state": "completed", 00:16:35.731 "digest": "sha384", 00:16:35.731 "dhgroup": "ffdhe8192" 00:16:35.731 } 00:16:35.731 } 00:16:35.731 ]' 00:16:35.731 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.731 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:35.731 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.731 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:35.731 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.731 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.731 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.731 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.990 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:35.991 14:57:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:36.559 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.559 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:36.559 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.559 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.559 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.559 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:36.559 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.559 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.559 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:36.559 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:36.819 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:36.819 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.819 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.819 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:36.819 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:36.819 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.819 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.819 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.819 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.819 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.819 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.819 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.819 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.078 00:16:37.078 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.078 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.078 14:57:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.078 14:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.078 14:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.078 14:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.078 14:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.078 14:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.078 14:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.078 { 00:16:37.078 "cntlid": 97, 00:16:37.078 "qid": 0, 00:16:37.078 "state": "enabled", 00:16:37.078 "thread": "nvmf_tgt_poll_group_000", 00:16:37.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:37.078 "listen_address": { 00:16:37.078 "trtype": "TCP", 00:16:37.078 "adrfam": "IPv4", 00:16:37.078 "traddr": "10.0.0.2", 00:16:37.078 "trsvcid": "4420" 00:16:37.078 }, 00:16:37.078 "peer_address": { 00:16:37.078 "trtype": "TCP", 00:16:37.078 "adrfam": "IPv4", 00:16:37.078 "traddr": "10.0.0.1", 00:16:37.078 "trsvcid": "47586" 00:16:37.078 }, 00:16:37.078 "auth": { 00:16:37.078 "state": "completed", 00:16:37.078 "digest": "sha512", 00:16:37.078 "dhgroup": "null" 00:16:37.078 } 00:16:37.078 } 00:16:37.078 ]' 00:16:37.078 14:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.337 14:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.337 14:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.337 14:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:37.337 14:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.337 14:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.337 14:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.337 14:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.596 14:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect 
--dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:37.596 14:57:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:38.164 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.164 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.164 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.164 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.164 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.164 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.164 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:38.164 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:38.424 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:38.424 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.424 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.424 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:38.424 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:38.424 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.424 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.424 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.424 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.424 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.424 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.424 14:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.424 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.683 00:16:38.683 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.683 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.683 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.683 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.683 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.683 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.683 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.683 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.683 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.683 { 00:16:38.683 "cntlid": 99, 00:16:38.683 "qid": 0, 00:16:38.683 "state": "enabled", 00:16:38.683 "thread": "nvmf_tgt_poll_group_000", 00:16:38.683 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:38.683 "listen_address": { 00:16:38.683 "trtype": "TCP", 00:16:38.683 "adrfam": "IPv4", 00:16:38.683 "traddr": "10.0.0.2", 00:16:38.683 "trsvcid": "4420" 00:16:38.683 }, 00:16:38.683 "peer_address": { 00:16:38.683 "trtype": "TCP", 00:16:38.683 "adrfam": "IPv4", 00:16:38.683 "traddr": "10.0.0.1", 00:16:38.683 "trsvcid": "47606" 00:16:38.683 }, 00:16:38.683 "auth": { 00:16:38.683 "state": "completed", 00:16:38.683 "digest": "sha512", 00:16:38.683 "dhgroup": "null" 00:16:38.683 } 00:16:38.683 } 00:16:38.683 ]' 00:16:38.683 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.942 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.942 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.943 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:38.943 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.943 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.943 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.943 14:57:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.201 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:39.202 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:39.769 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.769 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.769 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.769 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.769 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.769 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.769 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:39.770 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:40.028 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:16:40.028 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.028 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:40.028 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:40.028 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:40.028 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.028 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.028 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.028 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.028 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.028 
14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.028 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.028 14:57:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.288 00:16:40.288 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.288 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.288 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.288 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.288 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.288 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.288 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.288 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.288 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.288 { 00:16:40.288 "cntlid": 101, 00:16:40.288 "qid": 0, 00:16:40.288 "state": "enabled", 00:16:40.288 "thread": "nvmf_tgt_poll_group_000", 00:16:40.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:40.288 "listen_address": { 00:16:40.288 "trtype": "TCP", 00:16:40.288 "adrfam": "IPv4", 00:16:40.288 "traddr": "10.0.0.2", 00:16:40.288 "trsvcid": "4420" 00:16:40.288 }, 00:16:40.288 "peer_address": { 00:16:40.288 "trtype": "TCP", 00:16:40.288 "adrfam": "IPv4", 00:16:40.288 "traddr": "10.0.0.1", 00:16:40.288 "trsvcid": "47638" 00:16:40.288 }, 00:16:40.288 "auth": { 00:16:40.288 "state": "completed", 00:16:40.288 "digest": "sha512", 00:16:40.288 "dhgroup": "null" 00:16:40.288 } 00:16:40.288 } 00:16:40.288 ]' 00:16:40.288 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.547 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.547 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.547 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:40.547 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.547 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.547 14:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.547 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.806 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:40.806 14:57:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:41.374 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.374 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.374 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.374 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.374 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.374 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.374 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:41.375 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:41.375 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:41.375 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.375 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.375 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:41.375 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:41.375 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.375 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:41.375 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.375 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:41.375 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.375 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:41.375 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.375 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.633 00:16:41.892 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.892 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.892 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.892 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.892 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.892 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.892 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.892 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.892 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.892 { 00:16:41.892 "cntlid": 103, 00:16:41.892 "qid": 0, 00:16:41.892 "state": "enabled", 00:16:41.892 "thread": "nvmf_tgt_poll_group_000", 00:16:41.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:41.892 "listen_address": { 00:16:41.892 "trtype": "TCP", 00:16:41.892 "adrfam": "IPv4", 00:16:41.892 "traddr": "10.0.0.2", 00:16:41.892 "trsvcid": "4420" 00:16:41.892 }, 00:16:41.892 "peer_address": { 00:16:41.892 "trtype": "TCP", 00:16:41.892 "adrfam": "IPv4", 00:16:41.892 "traddr": "10.0.0.1", 00:16:41.892 "trsvcid": "47664" 00:16:41.892 }, 00:16:41.892 "auth": { 00:16:41.892 "state": "completed", 00:16:41.892 "digest": "sha512", 00:16:41.892 "dhgroup": "null" 00:16:41.892 } 00:16:41.892 } 00:16:41.892 ]' 00:16:41.892 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.151 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.151 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.151 14:57:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:42.151 14:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.151 14:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.151 14:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.151 14:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.410 14:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:42.410 14:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:42.977 14:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.977 14:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.977 14:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.977 14:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.977 14:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.977 14:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.977 14:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.977 14:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:42.977 14:57:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:43.236 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:43.236 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.236 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.236 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:43.236 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:43.236 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.236 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.236 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.236 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.236 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.236 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.236 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.236 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.496 00:16:43.496 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.496 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.496 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.496 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.496 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.496 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.496 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.496 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.496 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.496 { 00:16:43.496 "cntlid": 105, 00:16:43.496 "qid": 0, 00:16:43.496 "state": "enabled", 00:16:43.496 "thread": "nvmf_tgt_poll_group_000", 00:16:43.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.496 "listen_address": { 00:16:43.496 "trtype": "TCP", 00:16:43.496 "adrfam": "IPv4", 00:16:43.496 "traddr": "10.0.0.2", 00:16:43.496 "trsvcid": "4420" 00:16:43.496 }, 00:16:43.496 "peer_address": { 00:16:43.496 "trtype": "TCP", 00:16:43.496 "adrfam": "IPv4", 00:16:43.496 "traddr": "10.0.0.1", 00:16:43.496 "trsvcid": "47690" 00:16:43.496 }, 00:16:43.496 "auth": { 00:16:43.496 "state": "completed", 00:16:43.496 "digest": "sha512", 00:16:43.496 "dhgroup": "ffdhe2048" 00:16:43.496 } 00:16:43.496 } 00:16:43.496 ]' 00:16:43.496 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.755 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.755 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.755 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 
== \f\f\d\h\e\2\0\4\8 ]] 00:16:43.755 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.755 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.755 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.755 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.014 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:44.014 14:57:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:44.582 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.583 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.583 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.583 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.583 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.583 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.583 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.583 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:44.583 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:44.842 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:44.842 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.842 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:44.842 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:44.842 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:44.842 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.842 14:57:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.842 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.842 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.842 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.842 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.842 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.842 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:45.102 00:16:45.102 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.102 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.102 14:57:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.102 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.102 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.102 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.102 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.102 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.102 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.102 { 00:16:45.102 "cntlid": 107, 00:16:45.102 "qid": 0, 00:16:45.102 "state": "enabled", 00:16:45.102 "thread": "nvmf_tgt_poll_group_000", 00:16:45.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.102 "listen_address": { 00:16:45.102 "trtype": "TCP", 00:16:45.102 "adrfam": "IPv4", 00:16:45.102 "traddr": "10.0.0.2", 00:16:45.102 "trsvcid": "4420" 00:16:45.102 }, 00:16:45.102 "peer_address": { 00:16:45.102 "trtype": "TCP", 00:16:45.102 "adrfam": "IPv4", 00:16:45.102 "traddr": "10.0.0.1", 00:16:45.102 "trsvcid": "47714" 00:16:45.102 }, 00:16:45.102 "auth": { 00:16:45.102 "state": "completed", 00:16:45.102 "digest": "sha512", 00:16:45.102 "dhgroup": "ffdhe2048" 00:16:45.102 } 00:16:45.102 } 00:16:45.102 ]' 00:16:45.102 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.361 14:57:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.361 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.361 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:45.361 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.361 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.361 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.361 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.621 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:45.621 14:57:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:46.190 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.190 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.190 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.190 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.190 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.190 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.190 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:46.190 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:46.190 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:46.190 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.190 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.190 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:46.190 14:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:46.190 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.190 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.449 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.449 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.449 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.449 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.449 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.449 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.449 00:16:46.708 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.708 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.708 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.708 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.708 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.708 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.708 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.708 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.708 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.708 { 00:16:46.708 "cntlid": 109, 00:16:46.708 "qid": 0, 00:16:46.708 "state": "enabled", 00:16:46.708 "thread": "nvmf_tgt_poll_group_000", 00:16:46.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:46.708 "listen_address": { 00:16:46.708 "trtype": "TCP", 00:16:46.708 "adrfam": "IPv4", 00:16:46.708 "traddr": "10.0.0.2", 00:16:46.708 "trsvcid": "4420" 00:16:46.708 }, 00:16:46.708 "peer_address": { 00:16:46.708 "trtype": "TCP", 00:16:46.708 "adrfam": "IPv4", 00:16:46.708 "traddr": "10.0.0.1", 00:16:46.708 "trsvcid": "40500" 00:16:46.708 }, 00:16:46.708 "auth": { 00:16:46.708 "state": "completed", 00:16:46.708 "digest": 
"sha512", 00:16:46.708 "dhgroup": "ffdhe2048" 00:16:46.708 } 00:16:46.708 } 00:16:46.708 ]' 00:16:46.708 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.967 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.967 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.967 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:46.967 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.967 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.968 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.968 14:57:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.226 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:47.226 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:47.794 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.794 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.794 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.794 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.794 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.794 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.794 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:47.794 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:48.053 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:48.053 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.053 14:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:48.053 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:48.053 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:48.053 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.053 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:48.053 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.053 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.053 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.053 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:48.053 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.053 14:57:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:48.313 00:16:48.313 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.313 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.313 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.313 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.313 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.313 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.313 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.313 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.313 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.313 { 00:16:48.313 "cntlid": 111, 00:16:48.313 "qid": 0, 00:16:48.313 "state": "enabled", 00:16:48.313 "thread": "nvmf_tgt_poll_group_000", 00:16:48.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:48.313 "listen_address": { 00:16:48.313 "trtype": "TCP", 00:16:48.313 "adrfam": "IPv4", 00:16:48.313 "traddr": "10.0.0.2", 00:16:48.313 "trsvcid": "4420" 00:16:48.313 }, 00:16:48.313 "peer_address": { 00:16:48.313 "trtype": "TCP", 00:16:48.313 "adrfam": "IPv4", 00:16:48.313 "traddr": "10.0.0.1", 00:16:48.313 
"trsvcid": "40516" 00:16:48.313 }, 00:16:48.313 "auth": { 00:16:48.313 "state": "completed", 00:16:48.313 "digest": "sha512", 00:16:48.313 "dhgroup": "ffdhe2048" 00:16:48.313 } 00:16:48.313 } 00:16:48.313 ]' 00:16:48.313 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.572 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.572 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.572 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:48.572 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.572 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.572 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.572 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.831 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:48.831 14:57:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:49.400 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.400 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.400 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.400 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.400 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.400 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.400 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.400 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:49.400 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:49.659 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:49.659 14:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.659 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:49.659 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:49.659 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:49.659 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.659 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.659 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.659 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.659 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.659 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.659 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.659 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.918 00:16:49.918 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.918 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.918 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.918 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.918 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.918 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.918 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.918 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.918 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.918 { 00:16:49.918 "cntlid": 113, 00:16:49.918 "qid": 0, 00:16:49.918 "state": "enabled", 00:16:49.918 "thread": "nvmf_tgt_poll_group_000", 00:16:49.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.918 "listen_address": { 00:16:49.918 "trtype": "TCP", 00:16:49.918 "adrfam": 
"IPv4", 00:16:49.918 "traddr": "10.0.0.2", 00:16:49.918 "trsvcid": "4420" 00:16:49.918 }, 00:16:49.918 "peer_address": { 00:16:49.918 "trtype": "TCP", 00:16:49.918 "adrfam": "IPv4", 00:16:49.918 "traddr": "10.0.0.1", 00:16:49.918 "trsvcid": "40544" 00:16:49.918 }, 00:16:49.918 "auth": { 00:16:49.918 "state": "completed", 00:16:49.918 "digest": "sha512", 00:16:49.918 "dhgroup": "ffdhe3072" 00:16:49.918 } 00:16:49.918 } 00:16:49.918 ]' 00:16:49.918 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.177 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.177 14:57:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.177 14:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:50.177 14:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.177 14:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.177 14:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.177 14:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.437 14:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:50.437 14:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:51.006 14:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.006 14:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:51.006 14:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.006 14:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.006 14:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.006 14:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.006 14:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:51.006 14:57:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:51.265 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:51.265 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.265 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.265 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:51.265 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:51.265 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.265 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.265 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.265 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.265 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.265 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.265 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.265 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.524 00:16:51.524 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.524 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.524 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.524 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.524 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.524 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.524 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.524 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.524 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.524 { 
00:16:51.524 "cntlid": 115, 00:16:51.524 "qid": 0, 00:16:51.524 "state": "enabled", 00:16:51.524 "thread": "nvmf_tgt_poll_group_000", 00:16:51.524 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:51.524 "listen_address": { 00:16:51.524 "trtype": "TCP", 00:16:51.524 "adrfam": "IPv4", 00:16:51.524 "traddr": "10.0.0.2", 00:16:51.524 "trsvcid": "4420" 00:16:51.524 }, 00:16:51.524 "peer_address": { 00:16:51.524 "trtype": "TCP", 00:16:51.524 "adrfam": "IPv4", 00:16:51.524 "traddr": "10.0.0.1", 00:16:51.524 "trsvcid": "40588" 00:16:51.524 }, 00:16:51.524 "auth": { 00:16:51.524 "state": "completed", 00:16:51.524 "digest": "sha512", 00:16:51.524 "dhgroup": "ffdhe3072" 00:16:51.524 } 00:16:51.524 } 00:16:51.524 ]' 00:16:51.524 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.783 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.783 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.783 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:51.783 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.783 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.783 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.783 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.042 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:52.042 14:57:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:52.611 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.611 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.611 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.611 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.611 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.611 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.611 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:52.611 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:52.611 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:52.611 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.611 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:52.870 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:52.870 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.870 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.870 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.870 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.870 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.870 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.870 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.870 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.870 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.129 00:16:53.129 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:53.129 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:53.129 14:57:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.129 14:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.129 14:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.129 14:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.129 14:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.129 14:57:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.129 14:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:53.129 { 00:16:53.129 "cntlid": 117, 00:16:53.129 "qid": 0, 00:16:53.129 "state": "enabled", 00:16:53.129 "thread": "nvmf_tgt_poll_group_000", 00:16:53.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:53.129 "listen_address": { 00:16:53.129 "trtype": "TCP", 00:16:53.129 "adrfam": "IPv4", 00:16:53.129 "traddr": "10.0.0.2", 00:16:53.129 "trsvcid": "4420" 00:16:53.129 }, 00:16:53.129 "peer_address": { 00:16:53.129 "trtype": "TCP", 00:16:53.129 "adrfam": "IPv4", 00:16:53.129 "traddr": "10.0.0.1", 00:16:53.129 "trsvcid": "40594" 00:16:53.130 }, 00:16:53.130 "auth": { 00:16:53.130 "state": "completed", 00:16:53.130 "digest": "sha512", 00:16:53.130 "dhgroup": "ffdhe3072" 00:16:53.130 } 00:16:53.130 } 00:16:53.130 ]' 00:16:53.130 14:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.389 14:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:53.389 14:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.389 14:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:53.389 14:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.389 14:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.389 14:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.389 14:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.648 14:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:53.648 14:57:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.217 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.476 00:16:54.736 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.736 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.736 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.736 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.736 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.736 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
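The trace above cycles the same connect/authenticate sequence once per DH group and key index. As a reference point, this is the per-iteration flow it is exercising, written out as a minimal sketch with the addresses, NQNs, and key names taken from this log (paths are relative to the spdk tree, the target-side calls assume the default RPC socket, which is what rpc_cmd resolves to in this suite, and key3 names a DH-HMAC-CHAP key registered earlier in the test, not shown here):

  # host side: restrict negotiation to the digest/dhgroup under test
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
  # target side: allow the host NQN with the key for this iteration
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --dhchap-key key3
  # host side: attach a controller (this is where DH-HMAC-CHAP runs) and check the qpair
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
  # tear down before the next digest/dhgroup/key combination
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
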
00:16:54.736 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.736 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.736 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.736 { 00:16:54.736 "cntlid": 119, 00:16:54.736 "qid": 0, 00:16:54.736 "state": "enabled", 00:16:54.736 "thread": "nvmf_tgt_poll_group_000", 00:16:54.736 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:54.736 "listen_address": { 00:16:54.736 "trtype": "TCP", 00:16:54.736 "adrfam": "IPv4", 00:16:54.736 "traddr": "10.0.0.2", 00:16:54.736 "trsvcid": "4420" 00:16:54.736 }, 00:16:54.736 "peer_address": { 00:16:54.736 "trtype": "TCP", 00:16:54.736 "adrfam": "IPv4", 00:16:54.736 "traddr": "10.0.0.1", 00:16:54.736 "trsvcid": "40608" 00:16:54.736 }, 00:16:54.736 "auth": { 00:16:54.736 "state": "completed", 00:16:54.736 "digest": "sha512", 00:16:54.736 "dhgroup": "ffdhe3072" 00:16:54.736 } 00:16:54.736 } 00:16:54.736 ]' 00:16:54.736 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.995 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.995 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.995 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:54.995 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.995 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.995 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.995 14:57:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.254 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:55.254 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.823 14:57:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.083 00:16:56.341 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.341 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.341 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.341 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.342 14:57:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.342 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.342 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.342 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.342 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.342 { 00:16:56.342 "cntlid": 121, 00:16:56.342 "qid": 0, 00:16:56.342 "state": "enabled", 00:16:56.342 "thread": "nvmf_tgt_poll_group_000", 00:16:56.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:56.342 "listen_address": { 00:16:56.342 "trtype": "TCP", 00:16:56.342 "adrfam": "IPv4", 00:16:56.342 "traddr": "10.0.0.2", 00:16:56.342 "trsvcid": "4420" 00:16:56.342 }, 00:16:56.342 "peer_address": { 00:16:56.342 "trtype": "TCP", 00:16:56.342 "adrfam": "IPv4", 00:16:56.342 "traddr": "10.0.0.1", 00:16:56.342 "trsvcid": "54742" 00:16:56.342 }, 00:16:56.342 "auth": { 00:16:56.342 "state": "completed", 00:16:56.342 "digest": "sha512", 00:16:56.342 "dhgroup": "ffdhe4096" 00:16:56.342 } 00:16:56.342 } 00:16:56.342 ]' 00:16:56.342 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.342 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.342 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.600 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:56.600 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.600 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.600 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.600 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.858 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:56.858 14:57:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.426 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.685 00:16:57.944 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.944 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.944 14:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.944 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.944 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.944 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.944 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.944 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.944 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.944 { 00:16:57.944 "cntlid": 123, 00:16:57.944 "qid": 0, 00:16:57.944 "state": "enabled", 00:16:57.944 "thread": "nvmf_tgt_poll_group_000", 00:16:57.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:57.944 "listen_address": { 00:16:57.944 "trtype": "TCP", 00:16:57.944 "adrfam": "IPv4", 00:16:57.944 "traddr": "10.0.0.2", 00:16:57.944 "trsvcid": "4420" 00:16:57.944 }, 00:16:57.944 "peer_address": { 00:16:57.944 "trtype": "TCP", 00:16:57.944 "adrfam": "IPv4", 00:16:57.944 "traddr": "10.0.0.1", 00:16:57.944 "trsvcid": "54762" 00:16:57.944 }, 00:16:57.944 "auth": { 00:16:57.944 "state": "completed", 00:16:57.944 "digest": "sha512", 00:16:57.944 "dhgroup": "ffdhe4096" 00:16:57.944 } 00:16:57.944 } 00:16:57.944 ]' 00:16:57.944 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.203 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.203 14:57:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.203 14:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:58.203 14:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.203 14:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.203 14:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.203 14:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.462 14:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:58.462 14:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:16:59.030 14:57:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.030 14:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:59.030 14:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.030 14:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.030 14:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.030 14:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.031 14:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:59.031 14:57:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:59.031 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:59.031 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.031 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.031 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:59.031 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:59.031 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.031 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.031 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.031 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.031 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.031 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.031 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.031 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.290 00:16:59.549 14:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.549 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.549 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.549 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.549 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.549 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.549 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.549 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.549 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.549 { 00:16:59.549 "cntlid": 125, 00:16:59.549 "qid": 0, 00:16:59.549 "state": "enabled", 00:16:59.549 "thread": "nvmf_tgt_poll_group_000", 00:16:59.549 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:59.549 "listen_address": { 00:16:59.549 "trtype": "TCP", 00:16:59.549 "adrfam": "IPv4", 00:16:59.549 "traddr": "10.0.0.2", 00:16:59.549 "trsvcid": "4420" 00:16:59.549 }, 00:16:59.549 "peer_address": { 00:16:59.549 "trtype": "TCP", 00:16:59.549 "adrfam": "IPv4", 00:16:59.549 "traddr": "10.0.0.1", 00:16:59.549 "trsvcid": "54780" 00:16:59.549 }, 00:16:59.549 "auth": { 00:16:59.549 "state": "completed", 00:16:59.549 "digest": "sha512", 00:16:59.549 "dhgroup": "ffdhe4096" 00:16:59.549 } 00:16:59.549 } 00:16:59.549 ]' 00:16:59.549 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.549 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.808 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.808 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:59.808 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.808 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.808 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.808 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.067 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:17:00.067 14:57:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 
0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:17:00.636 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.636 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.636 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.636 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.636 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.636 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.636 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:00.636 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:00.636 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:00.636 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.636 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:00.636 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:00.636 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.636 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.636 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:00.636 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.636 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.895 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.895 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.895 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.895 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.155 00:17:01.155 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.155 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.155 14:57:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.155 14:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.155 14:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.155 14:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.155 14:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.155 14:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.155 14:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.155 { 00:17:01.155 "cntlid": 127, 00:17:01.155 "qid": 0, 00:17:01.155 "state": "enabled", 00:17:01.155 "thread": "nvmf_tgt_poll_group_000", 00:17:01.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:01.155 "listen_address": { 00:17:01.155 "trtype": "TCP", 00:17:01.155 "adrfam": "IPv4", 00:17:01.155 "traddr": "10.0.0.2", 00:17:01.155 "trsvcid": "4420" 00:17:01.155 }, 00:17:01.155 "peer_address": { 00:17:01.155 "trtype": "TCP", 00:17:01.155 "adrfam": "IPv4", 00:17:01.155 "traddr": "10.0.0.1", 00:17:01.155 "trsvcid": "54802" 00:17:01.155 }, 00:17:01.155 "auth": { 00:17:01.155 "state": "completed", 00:17:01.155 "digest": "sha512", 00:17:01.155 "dhgroup": "ffdhe4096" 00:17:01.155 } 00:17:01.155 } 00:17:01.155 ]' 00:17:01.155 14:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.414 14:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.414 14:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.414 14:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:01.414 14:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.414 14:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.414 14:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.414 14:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.673 14:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:17:01.673 14:57:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:17:02.248 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.248 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.248 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.248 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.248 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.248 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.248 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.248 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:02.248 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:02.511 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:02.511 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.511 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.511 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:02.511 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:02.511 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.511 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.511 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.511 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.511 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.511 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.511 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.511 14:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.770 00:17:02.770 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.770 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.770 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.030 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.030 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.030 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.030 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.030 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.030 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.030 { 00:17:03.030 "cntlid": 129, 00:17:03.030 "qid": 0, 00:17:03.030 "state": "enabled", 00:17:03.030 "thread": "nvmf_tgt_poll_group_000", 00:17:03.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:03.030 "listen_address": { 00:17:03.030 "trtype": "TCP", 00:17:03.030 "adrfam": "IPv4", 00:17:03.030 "traddr": "10.0.0.2", 00:17:03.030 "trsvcid": "4420" 00:17:03.030 }, 00:17:03.030 "peer_address": { 00:17:03.030 "trtype": "TCP", 00:17:03.030 "adrfam": "IPv4", 00:17:03.030 "traddr": "10.0.0.1", 00:17:03.030 "trsvcid": "54820" 00:17:03.030 }, 00:17:03.030 "auth": { 00:17:03.030 "state": "completed", 00:17:03.030 "digest": "sha512", 00:17:03.030 "dhgroup": "ffdhe6144" 00:17:03.030 } 00:17:03.030 } 00:17:03.030 ]' 00:17:03.030 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.030 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.030 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.030 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:03.030 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.030 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.030 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.030 14:57:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.289 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:17:03.289 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:17:03.859 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.859 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.859 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.859 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.859 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.859 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.859 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:03.859 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:04.118 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:04.118 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:04.118 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:04.118 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:04.118 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:04.118 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.118 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.118 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.118 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.118 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.118 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.118 14:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.118 14:57:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.377 00:17:04.377 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.377 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.377 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.637 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.637 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.637 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.637 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.637 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.637 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.637 { 00:17:04.637 "cntlid": 131, 00:17:04.637 "qid": 0, 00:17:04.637 "state": "enabled", 00:17:04.637 "thread": "nvmf_tgt_poll_group_000", 00:17:04.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:04.637 "listen_address": { 00:17:04.637 "trtype": "TCP", 00:17:04.637 "adrfam": "IPv4", 00:17:04.637 "traddr": "10.0.0.2", 00:17:04.637 "trsvcid": "4420" 00:17:04.637 }, 00:17:04.637 "peer_address": { 00:17:04.637 "trtype": "TCP", 00:17:04.637 "adrfam": "IPv4", 00:17:04.637 "traddr": "10.0.0.1", 00:17:04.637 "trsvcid": "54852" 00:17:04.637 }, 00:17:04.637 "auth": { 00:17:04.637 "state": "completed", 00:17:04.637 "digest": "sha512", 00:17:04.637 "dhgroup": "ffdhe6144" 00:17:04.637 } 00:17:04.637 } 00:17:04.637 ]' 00:17:04.637 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.637 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.637 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.637 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:04.637 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.896 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.896 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.896 14:57:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.896 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:17:04.896 14:57:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:17:05.465 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.465 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.465 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.465 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.465 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.465 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.465 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.465 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.724 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:05.724 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.724 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.724 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:05.724 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:05.725 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.725 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.725 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.725 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.725 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.725 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.725 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.725 14:57:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.983 00:17:06.243 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.243 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.243 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.243 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.243 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.243 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.243 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.243 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.243 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.243 { 00:17:06.243 "cntlid": 133, 00:17:06.243 "qid": 0, 00:17:06.243 "state": "enabled", 00:17:06.243 "thread": "nvmf_tgt_poll_group_000", 00:17:06.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:06.243 "listen_address": { 00:17:06.243 "trtype": "TCP", 00:17:06.243 "adrfam": "IPv4", 00:17:06.243 "traddr": "10.0.0.2", 00:17:06.243 "trsvcid": "4420" 00:17:06.243 }, 00:17:06.243 "peer_address": { 00:17:06.243 "trtype": "TCP", 00:17:06.243 "adrfam": "IPv4", 00:17:06.243 "traddr": "10.0.0.1", 00:17:06.243 "trsvcid": "59644" 00:17:06.243 }, 00:17:06.243 "auth": { 00:17:06.243 "state": "completed", 00:17:06.243 "digest": "sha512", 00:17:06.243 "dhgroup": "ffdhe6144" 00:17:06.243 } 00:17:06.243 } 00:17:06.243 ]' 00:17:06.243 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.503 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:06.503 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.503 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:06.503 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.503 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.503 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.503 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.762 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:17:06.762 14:57:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:17:07.330 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.330 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:07.330 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.330 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.330 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.330 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.330 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:07.330 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:07.330 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:07.330 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.330 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.330 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:07.330 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:07.330 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.331 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:07.331 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
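Every iteration verifies the negotiated parameters the same way before moving on: the target's qpair listing is filtered with jq and compared against the configured digest, DH group, and auth state, and then the kernel initiator repeats the handshake with the DHHC-1 secrets directly. A condensed sketch of that check, reusing the filters and connect arguments seen above (DHCHAP_SECRET and DHCHAP_CTRL_SECRET are stand-ins for the DHHC-1:xx:<base64>: strings already printed in this trace, not new values):

  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.digest'   # expect sha512
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.dhgroup'  # expect ffdhe6144 in this pass
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'    # expect completed
  # repeat the handshake through the kernel host path, then disconnect
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 \
      --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

Per the NVMe in-band authentication secret format, the two-digit field after DHHC-1 identifies the hash the key was prepared for (00 for a plain key, 01/02/03 for SHA-256/384/512), which matches the 00 through 03 prefixes on the key0 through key3 secrets printed in this log.
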
00:17:07.331 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.331 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.331 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:07.331 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.331 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.899 00:17:07.899 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.899 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.899 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.899 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.899 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.899 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.899 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.899 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.899 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.899 { 00:17:07.899 "cntlid": 135, 00:17:07.899 "qid": 0, 00:17:07.899 "state": "enabled", 00:17:07.899 "thread": "nvmf_tgt_poll_group_000", 00:17:07.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:07.899 "listen_address": { 00:17:07.899 "trtype": "TCP", 00:17:07.899 "adrfam": "IPv4", 00:17:07.899 "traddr": "10.0.0.2", 00:17:07.899 "trsvcid": "4420" 00:17:07.899 }, 00:17:07.899 "peer_address": { 00:17:07.899 "trtype": "TCP", 00:17:07.899 "adrfam": "IPv4", 00:17:07.899 "traddr": "10.0.0.1", 00:17:07.899 "trsvcid": "59670" 00:17:07.899 }, 00:17:07.899 "auth": { 00:17:07.899 "state": "completed", 00:17:07.899 "digest": "sha512", 00:17:07.899 "dhgroup": "ffdhe6144" 00:17:07.899 } 00:17:07.899 } 00:17:07.899 ]' 00:17:07.899 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:08.158 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.158 14:58:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:08.158 14:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:08.158 14:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:17:08.158 14:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.158 14:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.158 14:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.417 14:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:17:08.417 14:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:17:08.985 14:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.985 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.985 14:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.985 14:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.985 14:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.985 14:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.985 14:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.985 14:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.985 14:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:08.985 14:58:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:09.245 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:09.245 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.245 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:09.245 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:09.245 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:09.245 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.245 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:17:09.245 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.245 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.245 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.245 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.245 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.245 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.505 00:17:09.764 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.764 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.764 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.764 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.764 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.764 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.764 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.764 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.764 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.764 { 00:17:09.764 "cntlid": 137, 00:17:09.764 "qid": 0, 00:17:09.764 "state": "enabled", 00:17:09.764 "thread": "nvmf_tgt_poll_group_000", 00:17:09.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:09.764 "listen_address": { 00:17:09.764 "trtype": "TCP", 00:17:09.764 "adrfam": "IPv4", 00:17:09.764 "traddr": "10.0.0.2", 00:17:09.764 "trsvcid": "4420" 00:17:09.764 }, 00:17:09.764 "peer_address": { 00:17:09.764 "trtype": "TCP", 00:17:09.764 "adrfam": "IPv4", 00:17:09.764 "traddr": "10.0.0.1", 00:17:09.764 "trsvcid": "59706" 00:17:09.764 }, 00:17:09.764 "auth": { 00:17:09.764 "state": "completed", 00:17:09.764 "digest": "sha512", 00:17:09.764 "dhgroup": "ffdhe8192" 00:17:09.764 } 00:17:09.764 } 00:17:09.764 ]' 00:17:09.764 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.764 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.764 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.023 
14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:10.023 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.023 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.023 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.023 14:58:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.283 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:17:10.283 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.851 14:58:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.420 00:17:11.420 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.420 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.420 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.679 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.679 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.679 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.679 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.679 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.679 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.679 { 00:17:11.679 "cntlid": 139, 00:17:11.679 "qid": 0, 00:17:11.679 "state": "enabled", 00:17:11.679 "thread": "nvmf_tgt_poll_group_000", 00:17:11.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:11.679 "listen_address": { 00:17:11.679 "trtype": "TCP", 00:17:11.679 "adrfam": "IPv4", 00:17:11.679 "traddr": "10.0.0.2", 00:17:11.679 "trsvcid": "4420" 00:17:11.679 }, 00:17:11.679 "peer_address": { 00:17:11.679 "trtype": "TCP", 00:17:11.679 "adrfam": "IPv4", 00:17:11.679 "traddr": "10.0.0.1", 00:17:11.679 "trsvcid": "59728" 00:17:11.679 }, 00:17:11.679 "auth": { 00:17:11.679 "state": "completed", 00:17:11.679 "digest": "sha512", 00:17:11.679 "dhgroup": "ffdhe8192" 00:17:11.679 } 00:17:11.679 } 00:17:11.679 ]' 00:17:11.679 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.679 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.679 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.679 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:11.679 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.939 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.939 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.939 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.939 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:17:11.939 14:58:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: --dhchap-ctrl-secret DHHC-1:02:N2QwNDhmMDE3MjFmZGZmOTgzN2ExMjhhMTE1NGI3NGY5ZWU4MTFiZDA3ODNkMDkzuZAayQ==: 00:17:12.507 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.507 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.507 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.507 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.507 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.507 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.507 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:12.507 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:12.767 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:12.767 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.767 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:12.767 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:17:12.767 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:12.767 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.767 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.767 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.767 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.767 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.767 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.767 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.767 14:58:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:13.335 00:17:13.335 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.335 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.335 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.595 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.595 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.595 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.595 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.595 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.595 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.595 { 00:17:13.595 "cntlid": 141, 00:17:13.595 "qid": 0, 00:17:13.595 "state": "enabled", 00:17:13.595 "thread": "nvmf_tgt_poll_group_000", 00:17:13.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:13.595 "listen_address": { 00:17:13.595 "trtype": "TCP", 00:17:13.595 "adrfam": "IPv4", 00:17:13.595 "traddr": "10.0.0.2", 00:17:13.595 "trsvcid": "4420" 00:17:13.595 }, 00:17:13.595 "peer_address": { 00:17:13.595 "trtype": "TCP", 00:17:13.595 "adrfam": "IPv4", 00:17:13.595 "traddr": "10.0.0.1", 00:17:13.595 "trsvcid": "59762" 00:17:13.595 }, 00:17:13.595 "auth": { 00:17:13.595 
"state": "completed", 00:17:13.595 "digest": "sha512", 00:17:13.595 "dhgroup": "ffdhe8192" 00:17:13.595 } 00:17:13.595 } 00:17:13.595 ]' 00:17:13.595 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.595 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.595 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.595 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:13.595 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.595 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.595 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.595 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.854 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:17:13.854 14:58:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:01:OTg5NWQ4ZDU0NDA4MzQzYjA3YWE1YjJiZDY4MzcyMTNQfdAx: 00:17:14.422 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.422 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:14.422 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.422 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.422 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.422 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.422 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:14.422 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:14.680 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:17:14.680 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey 
qpairs 00:17:14.680 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:14.680 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:14.680 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:14.680 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.680 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:14.680 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.680 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.680 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.680 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:14.680 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.680 14:58:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:15.252 00:17:15.252 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.252 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.252 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.252 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.252 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.252 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.252 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.252 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.252 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.252 { 00:17:15.252 "cntlid": 143, 00:17:15.252 "qid": 0, 00:17:15.253 "state": "enabled", 00:17:15.253 "thread": "nvmf_tgt_poll_group_000", 00:17:15.253 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:15.253 "listen_address": { 00:17:15.253 "trtype": "TCP", 00:17:15.253 "adrfam": "IPv4", 00:17:15.253 "traddr": "10.0.0.2", 00:17:15.253 "trsvcid": "4420" 00:17:15.253 }, 00:17:15.253 "peer_address": { 00:17:15.253 "trtype": "TCP", 00:17:15.253 "adrfam": "IPv4", 00:17:15.253 
"traddr": "10.0.0.1", 00:17:15.253 "trsvcid": "59776" 00:17:15.253 }, 00:17:15.253 "auth": { 00:17:15.253 "state": "completed", 00:17:15.253 "digest": "sha512", 00:17:15.253 "dhgroup": "ffdhe8192" 00:17:15.253 } 00:17:15.253 } 00:17:15.253 ]' 00:17:15.253 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.253 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.253 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.585 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:15.585 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.585 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.585 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.585 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.585 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:17:15.585 14:58:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:17:16.247 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.247 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.247 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.247 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.247 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.247 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:16.247 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:17:16.247 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:17:16.247 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:16.247 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:16.247 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:16.553 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:17:16.553 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.553 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.553 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:16.553 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:16.553 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.553 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.553 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.553 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.553 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.553 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.553 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.553 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.812 00:17:17.071 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.071 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.071 14:58:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.071 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.071 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.071 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.071 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.071 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.071 14:58:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.071 { 00:17:17.071 "cntlid": 145, 00:17:17.071 "qid": 0, 00:17:17.071 "state": "enabled", 00:17:17.071 "thread": "nvmf_tgt_poll_group_000", 00:17:17.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:17.071 "listen_address": { 00:17:17.071 "trtype": "TCP", 00:17:17.071 "adrfam": "IPv4", 00:17:17.071 "traddr": "10.0.0.2", 00:17:17.071 "trsvcid": "4420" 00:17:17.071 }, 00:17:17.071 "peer_address": { 00:17:17.071 "trtype": "TCP", 00:17:17.071 "adrfam": "IPv4", 00:17:17.071 "traddr": "10.0.0.1", 00:17:17.071 "trsvcid": "52978" 00:17:17.071 }, 00:17:17.071 "auth": { 00:17:17.071 "state": "completed", 00:17:17.071 "digest": "sha512", 00:17:17.071 "dhgroup": "ffdhe8192" 00:17:17.071 } 00:17:17.071 } 00:17:17.071 ]' 00:17:17.071 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.071 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:17.071 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.330 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:17.330 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.330 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.330 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.330 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.589 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:17:17.589 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:NWFkYzRjYmJkNmMyNzg2OWUyM2ViNWZiZmFmZWIxZmUzZTYxYWFhYTZmYTdjZDYwIIZ88Q==: --dhchap-ctrl-secret DHHC-1:03:Mzc2MjMzMWRhNzIyN2QyY2Y3MjE0NzRiYzExODFlOTNmMWM3MjEzN2EwNmE1MTA3MTlkMjhjZjRhNzNkMDIwNAO7LK0=: 00:17:18.158 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.158 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.158 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.158 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.158 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
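Besides the SPDK-initiator path, each combination is also exercised with the kernel host through nvme-cli (the nvme_connect helper in the trace), passing the DHHC-1 secrets directly instead of named keys. A sketch of that form follows; the secret strings are the generated DHHC-1 blobs shown in the trace and appear here only as placeholders.

# Kernel-host variant distilled from nvme_connect in the trace. KEY and CKEY are
# placeholders for the DHHC-1:..-formatted host and controller secrets.
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
SUBNQN=nqn.2024-03.io.spdk:cnode0
KEY='DHHC-1:00:...'      # host secret (placeholder)
CKEY='DHHC-1:03:...'     # controller secret (placeholder)

nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"

nvme disconnect -n "$SUBNQN"   # trace expects: "NQN:... disconnected 1 controller(s)"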
00:17:18.158 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:18.158 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.158 14:58:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.158 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.158 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:18.158 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:18.158 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:18.158 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:18.158 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.158 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:18.158 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.158 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:18.158 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:18.158 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:18.417 request: 00:17:18.417 { 00:17:18.417 "name": "nvme0", 00:17:18.417 "trtype": "tcp", 00:17:18.417 "traddr": "10.0.0.2", 00:17:18.417 "adrfam": "ipv4", 00:17:18.417 "trsvcid": "4420", 00:17:18.417 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:18.417 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:18.417 "prchk_reftag": false, 00:17:18.417 "prchk_guard": false, 00:17:18.417 "hdgst": false, 00:17:18.417 "ddgst": false, 00:17:18.417 "dhchap_key": "key2", 00:17:18.417 "allow_unrecognized_csi": false, 00:17:18.417 "method": "bdev_nvme_attach_controller", 00:17:18.417 "req_id": 1 00:17:18.417 } 00:17:18.417 Got JSON-RPC error response 00:17:18.417 response: 00:17:18.417 { 00:17:18.417 "code": -5, 00:17:18.417 "message": "Input/output error" 00:17:18.417 } 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:18.676 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:18.677 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:18.936 request: 00:17:18.936 { 00:17:18.936 "name": "nvme0", 00:17:18.936 "trtype": "tcp", 00:17:18.936 "traddr": "10.0.0.2", 00:17:18.936 "adrfam": "ipv4", 00:17:18.936 "trsvcid": "4420", 00:17:18.936 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:18.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:18.936 "prchk_reftag": false, 00:17:18.936 "prchk_guard": false, 00:17:18.936 "hdgst": false, 00:17:18.936 
"ddgst": false, 00:17:18.936 "dhchap_key": "key1", 00:17:18.936 "dhchap_ctrlr_key": "ckey2", 00:17:18.936 "allow_unrecognized_csi": false, 00:17:18.936 "method": "bdev_nvme_attach_controller", 00:17:18.936 "req_id": 1 00:17:18.936 } 00:17:18.936 Got JSON-RPC error response 00:17:18.936 response: 00:17:18.936 { 00:17:18.936 "code": -5, 00:17:18.936 "message": "Input/output error" 00:17:18.936 } 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.936 14:58:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.504 request: 00:17:19.504 { 00:17:19.504 "name": "nvme0", 00:17:19.504 "trtype": "tcp", 00:17:19.504 "traddr": "10.0.0.2", 00:17:19.504 "adrfam": "ipv4", 00:17:19.504 "trsvcid": "4420", 00:17:19.504 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:19.504 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:19.504 "prchk_reftag": false, 00:17:19.504 "prchk_guard": false, 00:17:19.504 "hdgst": false, 00:17:19.504 "ddgst": false, 00:17:19.504 "dhchap_key": "key1", 00:17:19.504 "dhchap_ctrlr_key": "ckey1", 00:17:19.504 "allow_unrecognized_csi": false, 00:17:19.504 "method": "bdev_nvme_attach_controller", 00:17:19.504 "req_id": 1 00:17:19.504 } 00:17:19.504 Got JSON-RPC error response 00:17:19.504 response: 00:17:19.504 { 00:17:19.504 "code": -5, 00:17:19.504 "message": "Input/output error" 00:17:19.504 } 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3091340 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3091340 ']' 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3091340 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3091340 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3091340' 00:17:19.504 killing process with pid 3091340 00:17:19.504 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3091340 00:17:19.504 14:58:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3091340 00:17:19.764 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:19.764 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:19.764 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:19.764 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.764 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3113602 00:17:19.764 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3113602 00:17:19.764 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:19.764 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3113602 ']' 00:17:19.764 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.764 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.764 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.764 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.764 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.023 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.023 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:20.023 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:20.023 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:20.023 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.023 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.023 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:20.023 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3113602 00:17:20.023 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3113602 ']' 00:17:20.023 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.023 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.023 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
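The failure-path checks in the trace (target/auth.sh 144 through 156) deliberately mismatch the host and controller keys and assert that bdev_nvme_attach_controller is rejected with the JSON-RPC Input/output error (code -5) instead of connecting. A reduced sketch of one such check, mirroring the key1-on-target / key2-on-host attempt shown above; the rpc.py path is again an assumption.

# Negative check: the target accepts only key1 for this host, so presenting key2 must fail.
RPC=./scripts/rpc.py    # assumed path
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
SUBNQN=nqn.2024-03.io.spdk:cnode0

$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1

if $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2; then
    echo "unexpected: attach succeeded with a mismatched key" >&2
    exit 1
fi
# Expected JSON-RPC response, as in the trace:
#   "code": -5, "message": "Input/output error"

$RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"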
00:17:20.023 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.023 14:58:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.282 null0 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uQv 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.3Ww ]] 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3Ww 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.K4c 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.BRj ]] 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BRj 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.282 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:20.283 14:58:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Jiu 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.kwF ]] 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kwF 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.1xO 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:17:20.283 14:58:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.219 nvme0n1 00:17:21.219 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.219 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.219 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.219 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.219 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.219 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.219 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.478 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.478 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.478 { 00:17:21.478 "cntlid": 1, 00:17:21.478 "qid": 0, 00:17:21.478 "state": "enabled", 00:17:21.478 "thread": "nvmf_tgt_poll_group_000", 00:17:21.478 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:21.478 "listen_address": { 00:17:21.478 "trtype": "TCP", 00:17:21.478 "adrfam": "IPv4", 00:17:21.478 "traddr": "10.0.0.2", 00:17:21.478 "trsvcid": "4420" 00:17:21.478 }, 00:17:21.478 "peer_address": { 00:17:21.478 "trtype": "TCP", 00:17:21.478 "adrfam": "IPv4", 00:17:21.478 "traddr": "10.0.0.1", 00:17:21.478 "trsvcid": "53036" 00:17:21.478 }, 00:17:21.478 "auth": { 00:17:21.478 "state": "completed", 00:17:21.478 "digest": "sha512", 00:17:21.478 "dhgroup": "ffdhe8192" 00:17:21.478 } 00:17:21.478 } 00:17:21.478 ]' 00:17:21.478 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.478 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.478 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.478 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:21.478 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.478 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.478 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.478 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.736 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:17:21.736 14:58:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:17:22.303 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.303 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.303 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.303 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.303 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.303 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:22.303 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.303 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.303 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.303 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:22.303 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:22.562 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:22.562 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:22.562 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:22.562 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:22.562 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.562 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:22.562 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.562 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:22.562 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.562 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.562 request: 00:17:22.562 { 00:17:22.562 "name": "nvme0", 00:17:22.562 "trtype": "tcp", 00:17:22.562 "traddr": "10.0.0.2", 00:17:22.562 "adrfam": "ipv4", 00:17:22.562 "trsvcid": "4420", 00:17:22.562 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:22.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:22.562 "prchk_reftag": false, 00:17:22.562 "prchk_guard": false, 00:17:22.562 "hdgst": false, 00:17:22.562 "ddgst": false, 00:17:22.562 "dhchap_key": "key3", 00:17:22.562 "allow_unrecognized_csi": false, 00:17:22.562 "method": "bdev_nvme_attach_controller", 00:17:22.562 "req_id": 1 00:17:22.562 } 00:17:22.562 Got JSON-RPC error response 00:17:22.562 response: 00:17:22.562 { 00:17:22.562 "code": -5, 00:17:22.562 "message": "Input/output error" 00:17:22.562 } 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.820 14:58:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.079 request: 00:17:23.079 { 00:17:23.079 "name": "nvme0", 00:17:23.079 "trtype": "tcp", 00:17:23.079 "traddr": "10.0.0.2", 00:17:23.079 "adrfam": "ipv4", 00:17:23.079 "trsvcid": "4420", 00:17:23.079 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:23.079 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:23.079 "prchk_reftag": false, 00:17:23.079 "prchk_guard": false, 00:17:23.079 "hdgst": false, 00:17:23.079 "ddgst": false, 00:17:23.079 "dhchap_key": "key3", 00:17:23.079 "allow_unrecognized_csi": false, 00:17:23.079 "method": "bdev_nvme_attach_controller", 00:17:23.079 "req_id": 1 00:17:23.079 } 00:17:23.079 Got JSON-RPC error response 00:17:23.079 response: 00:17:23.079 { 00:17:23.079 "code": -5, 00:17:23.079 "message": "Input/output error" 00:17:23.079 } 00:17:23.079 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:23.079 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.079 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.079 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.079 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:23.079 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:23.079 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:23.079 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:23.079 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:23.079 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:23.337 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.338 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.338 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.338 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.338 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:23.338 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.338 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.338 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.338 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:23.338 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:23.338 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:23.338 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:23.338 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.338 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:23.338 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:23.338 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:23.338 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:23.338 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:23.596 request: 00:17:23.596 { 00:17:23.596 "name": "nvme0", 00:17:23.596 "trtype": "tcp", 00:17:23.596 "traddr": "10.0.0.2", 00:17:23.596 "adrfam": "ipv4", 00:17:23.596 "trsvcid": "4420", 00:17:23.596 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:23.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:23.596 "prchk_reftag": false, 00:17:23.596 "prchk_guard": false, 00:17:23.596 "hdgst": false, 00:17:23.596 "ddgst": false, 00:17:23.596 "dhchap_key": "key0", 00:17:23.596 "dhchap_ctrlr_key": "key1", 00:17:23.596 "allow_unrecognized_csi": false, 00:17:23.596 "method": "bdev_nvme_attach_controller", 00:17:23.596 "req_id": 1 00:17:23.596 } 00:17:23.596 Got JSON-RPC error response 00:17:23.596 response: 00:17:23.596 { 00:17:23.596 "code": -5, 00:17:23.596 "message": "Input/output error" 00:17:23.596 } 00:17:23.596 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:23.596 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:23.596 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:23.596 14:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:23.596 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:23.596 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:23.596 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:23.855 nvme0n1 00:17:23.855 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:23.855 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.855 14:58:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:24.114 14:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.114 14:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.114 14:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.373 14:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:17:24.373 14:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.373 14:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.373 14:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.373 14:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:24.373 14:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:24.373 14:58:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:25.310 nvme0n1 00:17:25.310 14:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:25.310 14:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:25.310 14:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.310 14:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.310 14:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:25.310 14:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.310 14:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.310 14:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.310 14:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:25.310 14:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:25.310 14:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.569 14:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.569 14:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:17:25.569 14:58:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: --dhchap-ctrl-secret DHHC-1:03:M2EwNWE1YjgyMzU5MjA5NDJmNTEzZDYyZDg3YTUzNjhmMjRiMTFmZTQ3OWUzNDdkMTI3MTg3MDg3NmQyMTU3NbfYlEg=: 00:17:26.137 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:26.137 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:26.137 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:26.137 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:26.137 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:26.137 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:26.137 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:26.137 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.137 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.396 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # 
NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:26.396 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:26.396 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:26.396 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:26.396 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.396 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:26.396 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.396 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:26.396 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:26.396 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:26.655 request: 00:17:26.655 { 00:17:26.655 "name": "nvme0", 00:17:26.655 "trtype": "tcp", 00:17:26.655 "traddr": "10.0.0.2", 00:17:26.655 "adrfam": "ipv4", 00:17:26.655 "trsvcid": "4420", 00:17:26.655 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:26.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:26.655 "prchk_reftag": false, 00:17:26.655 "prchk_guard": false, 00:17:26.655 "hdgst": false, 00:17:26.655 "ddgst": false, 00:17:26.655 "dhchap_key": "key1", 00:17:26.655 "allow_unrecognized_csi": false, 00:17:26.655 "method": "bdev_nvme_attach_controller", 00:17:26.655 "req_id": 1 00:17:26.655 } 00:17:26.655 Got JSON-RPC error response 00:17:26.655 response: 00:17:26.655 { 00:17:26.655 "code": -5, 00:17:26.655 "message": "Input/output error" 00:17:26.655 } 00:17:26.655 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:26.655 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:26.655 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:26.655 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:26.655 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:26.655 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:26.655 14:58:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:27.591 nvme0n1 00:17:27.591 14:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:27.591 14:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:27.591 14:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.850 14:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.850 14:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.850 14:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.850 14:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.850 14:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.850 14:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.850 14:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.850 14:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:27.850 14:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:27.850 14:58:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:28.109 nvme0n1 00:17:28.368 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:28.368 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.368 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:28.368 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.368 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.368 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.627 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:28.627 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.627 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.627 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.627 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: '' 2s 00:17:28.627 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:28.627 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:28.627 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: 00:17:28.627 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:28.627 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:28.627 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:28.627 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: ]] 00:17:28.627 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NjM3NWI5NTdkNTE5ODZjMWVkOWU3NjM0MWNlNWRiYmNgBLlj: 00:17:28.627 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:28.627 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:28.627 14:58:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # 
nvme_set_keys nvme0 '' DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: 2s 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: ]] 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YTkxNWE2NjBlZWY1MGQ2MjM4MTI4NTNiNDdjZDgxZGJmMGEyNDY0YmEzM2E1ZDQw/ZLiFA==: 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:31.160 14:58:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:33.063 14:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:33.063 14:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:33.063 14:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:33.063 14:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:33.063 14:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:33.063 14:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:33.063 14:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:33.063 14:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.063 14:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:33.063 14:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.063 14:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.063 14:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.063 14:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:33.063 14:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 
4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:33.063 14:58:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:33.630 nvme0n1 00:17:33.631 14:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:33.631 14:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.631 14:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.631 14:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.631 14:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:33.631 14:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:34.198 14:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:34.198 14:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:34.198 14:58:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.198 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.198 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:34.198 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.198 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.198 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.198 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:34.198 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:34.457 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:34.457 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:34.457 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.716 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.716 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:34.716 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.716 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.716 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.716 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:34.716 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:34.716 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:34.716 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:34.716 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.716 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:34.716 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:34.716 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:34.716 14:58:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:34.975 request: 00:17:34.975 { 00:17:34.975 "name": "nvme0", 00:17:34.975 "dhchap_key": "key1", 00:17:34.975 "dhchap_ctrlr_key": "key3", 00:17:34.975 "method": "bdev_nvme_set_keys", 00:17:34.975 "req_id": 1 00:17:34.975 } 00:17:34.975 Got JSON-RPC error response 00:17:34.975 response: 00:17:34.975 { 00:17:34.975 "code": -13, 00:17:34.975 "message": "Permission denied" 00:17:34.975 } 00:17:35.234 14:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:35.234 14:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:35.234 14:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:35.234 14:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:35.234 14:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:35.234 14:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.234 14:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 
00:17:35.234 14:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:35.234 14:58:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:36.609 14:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:36.609 14:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:36.609 14:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.609 14:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:36.609 14:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:36.609 14:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.609 14:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.609 14:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.609 14:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:36.609 14:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:36.609 14:58:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:37.175 nvme0n1 00:17:37.175 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:37.175 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.175 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.433 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.433 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:37.433 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:37.433 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:37.433 14:58:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:37.433 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.433 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:37.433 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:37.433 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:37.433 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:37.691 request: 00:17:37.691 { 00:17:37.691 "name": "nvme0", 00:17:37.691 "dhchap_key": "key2", 00:17:37.691 "dhchap_ctrlr_key": "key0", 00:17:37.691 "method": "bdev_nvme_set_keys", 00:17:37.691 "req_id": 1 00:17:37.691 } 00:17:37.691 Got JSON-RPC error response 00:17:37.691 response: 00:17:37.691 { 00:17:37.691 "code": -13, 00:17:37.691 "message": "Permission denied" 00:17:37.691 } 00:17:37.691 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:37.691 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:37.691 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:37.691 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:37.691 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:37.691 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:37.691 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.949 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:37.949 14:58:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:38.884 14:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:38.884 14:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:38.884 14:58:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.143 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:39.143 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:39.143 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:39.143 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3091380 00:17:39.143 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3091380 ']' 00:17:39.143 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3091380 00:17:39.143 14:58:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:39.143 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.143 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3091380 00:17:39.143 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:39.143 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:39.143 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3091380' 00:17:39.143 killing process with pid 3091380 00:17:39.143 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3091380 00:17:39.143 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3091380 00:17:39.401 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:39.401 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:39.401 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:39.401 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:39.401 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:39.401 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:39.401 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:39.659 rmmod nvme_tcp 00:17:39.659 rmmod nvme_fabrics 00:17:39.659 rmmod nvme_keyring 00:17:39.659 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:39.659 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:39.659 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:39.659 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3113602 ']' 00:17:39.659 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3113602 00:17:39.659 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3113602 ']' 00:17:39.659 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3113602 00:17:39.659 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:39.659 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.659 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3113602 00:17:39.659 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.659 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.659 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3113602' 00:17:39.659 killing process with pid 3113602 00:17:39.659 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@973 -- # kill 3113602 00:17:39.659 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3113602 00:17:39.918 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:39.918 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:39.918 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:39.918 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:39.919 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:39.919 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:39.919 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:39.919 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:39.919 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:39.919 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.919 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.919 14:58:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.823 14:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:41.823 14:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.uQv /tmp/spdk.key-sha256.K4c /tmp/spdk.key-sha384.Jiu /tmp/spdk.key-sha512.1xO /tmp/spdk.key-sha512.3Ww /tmp/spdk.key-sha384.BRj /tmp/spdk.key-sha256.kwF '' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf-auth.log 00:17:41.823 00:17:41.823 real 2m33.770s 00:17:41.823 user 5m54.705s 00:17:41.823 sys 0m24.337s 00:17:41.823 14:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.823 14:58:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.823 ************************************ 00:17:41.823 END TEST nvmf_auth_target 00:17:41.823 ************************************ 00:17:41.823 14:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:41.823 14:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:41.823 14:58:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:41.823 14:58:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.823 14:58:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:42.083 ************************************ 00:17:42.083 START TEST nvmf_bdevio_no_huge 00:17:42.083 ************************************ 00:17:42.083 14:58:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh 
--transport=tcp --no-hugepages 00:17:42.083 * Looking for test storage... 00:17:42.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:17:42.083 14:58:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:42.083 14:58:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:17:42.083 14:58:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:42.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.083 --rc genhtml_branch_coverage=1 00:17:42.083 --rc genhtml_function_coverage=1 00:17:42.083 --rc genhtml_legend=1 00:17:42.083 --rc geninfo_all_blocks=1 00:17:42.083 --rc geninfo_unexecuted_blocks=1 00:17:42.083 00:17:42.083 ' 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:42.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.083 --rc genhtml_branch_coverage=1 00:17:42.083 --rc genhtml_function_coverage=1 00:17:42.083 --rc genhtml_legend=1 00:17:42.083 --rc geninfo_all_blocks=1 00:17:42.083 --rc geninfo_unexecuted_blocks=1 00:17:42.083 00:17:42.083 ' 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:42.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.083 --rc genhtml_branch_coverage=1 00:17:42.083 --rc genhtml_function_coverage=1 00:17:42.083 --rc genhtml_legend=1 00:17:42.083 --rc geninfo_all_blocks=1 00:17:42.083 --rc geninfo_unexecuted_blocks=1 00:17:42.083 00:17:42.083 ' 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:42.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:42.083 --rc genhtml_branch_coverage=1 00:17:42.083 --rc genhtml_function_coverage=1 00:17:42.083 --rc genhtml_legend=1 00:17:42.083 --rc geninfo_all_blocks=1 00:17:42.083 --rc geninfo_unexecuted_blocks=1 00:17:42.083 00:17:42.083 ' 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.083 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:42.084 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:42.084 14:58:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:48.652 
14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:48.652 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:48.652 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:48.652 Found net devices under 0000:86:00.0: cvl_0_0 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:48.652 Found net devices under 0000:86:00.1: cvl_0_1 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:48.652 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:48.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:17:48.652 00:17:48.652 --- 10.0.0.2 ping statistics --- 00:17:48.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.653 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:17:48.653 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:48.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:48.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:17:48.653 00:17:48.653 --- 10.0.0.1 ping statistics --- 00:17:48.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.653 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:17:48.653 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.653 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:48.653 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:48.653 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.653 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:48.653 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:48.653 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.653 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:48.653 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:48.653 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:48.653 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:48.653 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:48.653 14:58:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:48.653 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3120486 00:17:48.653 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:48.653 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3120486 00:17:48.653 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3120486 ']' 00:17:48.653 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.653 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.653 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.653 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.653 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:48.653 [2024-12-11 14:58:41.056042] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:17:48.653 [2024-12-11 14:58:41.056089] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:48.653 [2024-12-11 14:58:41.141920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:48.653 [2024-12-11 14:58:41.189288] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.653 [2024-12-11 14:58:41.189326] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.653 [2024-12-11 14:58:41.189333] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.653 [2024-12-11 14:58:41.189340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.653 [2024-12-11 14:58:41.189346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
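The preceding trace brings the target up for the no-hugepage bdevio run: the two e810 ports appear as cvl_0_0/cvl_0_1, cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2 while the initiator side keeps 10.0.0.1, an ACCEPT rule tagged SPDK_NVMF is inserted for TCP port 4420, reachability is ping-checked in both directions, and nvmf_tgt is launched inside the namespace with --no-huge -s 1024 -m 0x78. A condensed sketch of that bring-up, reusing the exact commands and addresses visible in the trace (the workspace path is specific to this run), is:

# namespace and addressing, as in nvmf_tcp_init above
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP listener port and verify reachability both ways
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# start the target without hugepages, matching nvmfappstart -m 0x78 above
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &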
00:17:48.653 [2024-12-11 14:58:41.190593] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:17:48.653 [2024-12-11 14:58:41.190704] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:17:48.653 [2024-12-11 14:58:41.190808] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:48.653 [2024-12-11 14:58:41.190810] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:17:48.912 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.912 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:48.912 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:48.912 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:48.912 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:48.912 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.912 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:48.912 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.912 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:48.912 [2024-12-11 14:58:41.938307] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.912 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.912 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:48.912 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.912 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:48.912 Malloc0 00:17:49.170 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.170 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:49.170 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.170 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.170 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.170 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:49.170 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.170 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.170 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.170 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:17:49.170 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.170 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:49.170 [2024-12-11 14:58:41.982623] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:49.170 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.170 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:49.171 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:49.171 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:49.171 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:49.171 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:49.171 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:49.171 { 00:17:49.171 "params": { 00:17:49.171 "name": "Nvme$subsystem", 00:17:49.171 "trtype": "$TEST_TRANSPORT", 00:17:49.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:49.171 "adrfam": "ipv4", 00:17:49.171 "trsvcid": "$NVMF_PORT", 00:17:49.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:49.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:49.171 "hdgst": ${hdgst:-false}, 00:17:49.171 "ddgst": ${ddgst:-false} 00:17:49.171 }, 00:17:49.171 "method": "bdev_nvme_attach_controller" 00:17:49.171 } 00:17:49.171 EOF 00:17:49.171 )") 00:17:49.171 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:49.171 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:17:49.171 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:49.171 14:58:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:49.171 "params": { 00:17:49.171 "name": "Nvme1", 00:17:49.171 "trtype": "tcp", 00:17:49.171 "traddr": "10.0.0.2", 00:17:49.171 "adrfam": "ipv4", 00:17:49.171 "trsvcid": "4420", 00:17:49.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.171 "hdgst": false, 00:17:49.171 "ddgst": false 00:17:49.171 }, 00:17:49.171 "method": "bdev_nvme_attach_controller" 00:17:49.171 }' 00:17:49.171 [2024-12-11 14:58:42.032413] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
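Once the listener is up, bdevio.sh provisions the target over JSON-RPC (a TCP transport with an 8192-byte in-capsule data size, a 64 MiB/512-byte Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a listener on 10.0.0.2:4420) and then starts the bdevio app, feeding it the bdev_nvme_attach_controller entry that gen_nvmf_target_json prints above via /dev/fd/62. A condensed sketch of the same sequence follows; it writes the config to a regular file instead of a process-substitution fd (the file name is illustrative), and the subsystems/bdev wrapper around the attach parameters is the standard SPDK JSON-config layout, reconstructed here because only the inner entry is printed in the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py

# provision the running target on its default RPC socket,
# mirroring the rpc_cmd calls from bdevio.sh lines 18-22 above
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# attach-controller config equivalent to the entry printed by gen_nvmf_target_json
cat > /tmp/bdevio_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

# run the bdevio suite against that config, without hugepages
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/bdev/bdevio/bdevio \
    --json /tmp/bdevio_nvme.json --no-huge -s 1024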
00:17:49.171 [2024-12-11 14:58:42.032458] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3120732 ] 00:17:49.171 [2024-12-11 14:58:42.113365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:49.171 [2024-12-11 14:58:42.162561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.171 [2024-12-11 14:58:42.162675] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.171 [2024-12-11 14:58:42.162676] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.429 I/O targets: 00:17:49.429 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:49.429 00:17:49.429 00:17:49.429 CUnit - A unit testing framework for C - Version 2.1-3 00:17:49.429 http://cunit.sourceforge.net/ 00:17:49.429 00:17:49.429 00:17:49.429 Suite: bdevio tests on: Nvme1n1 00:17:49.429 Test: blockdev write read block ...passed 00:17:49.429 Test: blockdev write zeroes read block ...passed 00:17:49.429 Test: blockdev write zeroes read no split ...passed 00:17:49.688 Test: blockdev write zeroes read split ...passed 00:17:49.688 Test: blockdev write zeroes read split partial ...passed 00:17:49.688 Test: blockdev reset ...[2024-12-11 14:58:42.494402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:49.688 [2024-12-11 14:58:42.494466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x592540 (9): Bad file descriptor 00:17:49.688 [2024-12-11 14:58:42.510519] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:17:49.688 passed 00:17:49.688 Test: blockdev write read 8 blocks ...passed 00:17:49.688 Test: blockdev write read size > 128k ...passed 00:17:49.688 Test: blockdev write read invalid size ...passed 00:17:49.688 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:49.688 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:49.688 Test: blockdev write read max offset ...passed 00:17:49.688 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:49.688 Test: blockdev writev readv 8 blocks ...passed 00:17:49.688 Test: blockdev writev readv 30 x 1block ...passed 00:17:49.688 Test: blockdev writev readv block ...passed 00:17:49.688 Test: blockdev writev readv size > 128k ...passed 00:17:49.688 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:49.688 Test: blockdev comparev and writev ...[2024-12-11 14:58:42.679856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:49.688 [2024-12-11 14:58:42.679886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:49.688 [2024-12-11 14:58:42.679900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:49.688 [2024-12-11 14:58:42.679908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:49.688 [2024-12-11 14:58:42.680142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:49.688 [2024-12-11 14:58:42.680154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:49.688 [2024-12-11 14:58:42.680172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:49.688 [2024-12-11 14:58:42.680179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:49.688 [2024-12-11 14:58:42.680439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:49.688 [2024-12-11 14:58:42.680459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:49.688 [2024-12-11 14:58:42.680471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:49.688 [2024-12-11 14:58:42.680478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:49.688 [2024-12-11 14:58:42.680705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:49.688 [2024-12-11 14:58:42.680715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:49.688 [2024-12-11 14:58:42.680727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:49.688 [2024-12-11 14:58:42.680734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:49.688 passed 00:17:49.946 Test: blockdev nvme passthru rw ...passed 00:17:49.946 Test: blockdev nvme passthru vendor specific ...[2024-12-11 14:58:42.762438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:49.946 [2024-12-11 14:58:42.762457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:49.946 [2024-12-11 14:58:42.762565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:49.946 [2024-12-11 14:58:42.762576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:49.946 [2024-12-11 14:58:42.762681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:49.946 [2024-12-11 14:58:42.762691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:49.946 [2024-12-11 14:58:42.762790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:49.946 [2024-12-11 14:58:42.762800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:49.946 passed 00:17:49.946 Test: blockdev nvme admin passthru ...passed 00:17:49.946 Test: blockdev copy ...passed 00:17:49.946 00:17:49.946 Run Summary: Type Total Ran Passed Failed Inactive 00:17:49.946 suites 1 1 n/a 0 0 00:17:49.946 tests 23 23 23 0 0 00:17:49.946 asserts 152 152 152 0 n/a 00:17:49.946 00:17:49.946 Elapsed time = 0.895 seconds 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:50.205 rmmod nvme_tcp 00:17:50.205 rmmod nvme_fabrics 00:17:50.205 rmmod nvme_keyring 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3120486 ']' 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3120486 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3120486 ']' 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3120486 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3120486 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3120486' 00:17:50.205 killing process with pid 3120486 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3120486 00:17:50.205 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3120486 00:17:50.464 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:50.464 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:50.464 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:50.464 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:50.464 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:50.464 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:50.464 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:50.464 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:50.723 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:50.723 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.723 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.723 14:58:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.747 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:52.747 00:17:52.747 real 0m10.699s 00:17:52.747 user 0m12.738s 00:17:52.747 sys 0m5.342s 00:17:52.747 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.747 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:17:52.747 ************************************ 00:17:52.747 END TEST nvmf_bdevio_no_huge 00:17:52.747 ************************************ 00:17:52.747 14:58:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:52.747 14:58:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:52.747 14:58:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.747 14:58:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:52.747 ************************************ 00:17:52.747 START TEST nvmf_tls 00:17:52.747 ************************************ 00:17:52.747 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:52.747 * Looking for test storage... 00:17:52.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:17:52.747 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:52.747 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:17:52.747 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:53.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.007 --rc genhtml_branch_coverage=1 00:17:53.007 --rc genhtml_function_coverage=1 00:17:53.007 --rc genhtml_legend=1 00:17:53.007 --rc geninfo_all_blocks=1 00:17:53.007 --rc geninfo_unexecuted_blocks=1 00:17:53.007 00:17:53.007 ' 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:53.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.007 --rc genhtml_branch_coverage=1 00:17:53.007 --rc genhtml_function_coverage=1 00:17:53.007 --rc genhtml_legend=1 00:17:53.007 --rc geninfo_all_blocks=1 00:17:53.007 --rc geninfo_unexecuted_blocks=1 00:17:53.007 00:17:53.007 ' 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:53.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.007 --rc genhtml_branch_coverage=1 00:17:53.007 --rc genhtml_function_coverage=1 00:17:53.007 --rc genhtml_legend=1 00:17:53.007 --rc geninfo_all_blocks=1 00:17:53.007 --rc geninfo_unexecuted_blocks=1 00:17:53.007 00:17:53.007 ' 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:53.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.007 --rc genhtml_branch_coverage=1 00:17:53.007 --rc genhtml_function_coverage=1 00:17:53.007 --rc genhtml_legend=1 00:17:53.007 --rc geninfo_all_blocks=1 00:17:53.007 --rc geninfo_unexecuted_blocks=1 00:17:53.007 00:17:53.007 ' 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:53.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.007 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.008 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:53.008 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:53.008 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:17:53.008 14:58:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:59.581 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:59.581 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:59.581 Found net devices under 0000:86:00.0: cvl_0_0 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:59.581 Found net devices under 0000:86:00.1: cvl_0_1 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:59.581 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:59.582 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.582 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:17:59.582 00:17:59.582 --- 10.0.0.2 ping statistics --- 00:17:59.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.582 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:59.582 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:59.582 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:17:59.582 00:17:59.582 --- 10.0.0.1 ping statistics --- 00:17:59.582 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.582 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3124500 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3124500 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3124500 ']' 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.582 14:58:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.582 [2024-12-11 14:58:51.878324] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:17:59.582 [2024-12-11 14:58:51.878366] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.582 [2024-12-11 14:58:51.958036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.582 [2024-12-11 14:58:51.997799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.582 [2024-12-11 14:58:51.997835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.582 [2024-12-11 14:58:51.997843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.582 [2024-12-11 14:58:51.997849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.582 [2024-12-11 14:58:51.997854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.582 [2024-12-11 14:58:51.998406] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.582 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.582 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:59.582 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:59.582 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:59.582 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.582 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.582 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:59.582 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:59.582 true 00:17:59.582 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:59.582 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:59.582 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:59.582 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:59.582 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:59.841 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:59.841 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:59.841 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:59.841 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:59.841 14:58:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:00.100 14:58:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:00.100 14:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:00.358 14:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:00.358 14:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:00.358 14:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:00.358 14:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:00.616 14:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:00.616 14:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:00.616 14:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:00.616 14:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:00.616 14:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:00.875 14:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:00.875 14:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:00.875 14:58:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:01.134 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:01.134 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 
00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.DDnr37sBiT 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.GDsoqErvnd 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.DDnr37sBiT 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.GDsoqErvnd 00:18:01.393 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:01.652 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py framework_start_init 00:18:01.911 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.DDnr37sBiT 00:18:01.911 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.DDnr37sBiT 00:18:01.911 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:01.911 [2024-12-11 14:58:54.926271] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.911 14:58:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:02.170 14:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:02.428 [2024-12-11 14:58:55.291193] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:02.428 [2024-12-11 14:58:55.291435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.428 14:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:02.687 malloc0 00:18:02.687 14:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:02.687 14:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.DDnr37sBiT 00:18:02.946 14:58:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:03.204 14:58:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.DDnr37sBiT 00:18:13.180 Initializing NVMe Controllers 00:18:13.180 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:13.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:13.180 Initialization complete. Launching workers. 00:18:13.180 ======================================================== 00:18:13.180 Latency(us) 00:18:13.180 Device Information : IOPS MiB/s Average min max 00:18:13.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16565.67 64.71 3863.48 808.35 4458.13 00:18:13.180 ======================================================== 00:18:13.180 Total : 16565.67 64.71 3863.48 808.35 4458.13 00:18:13.180 00:18:13.180 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.DDnr37sBiT 00:18:13.180 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.180 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:13.180 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.180 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DDnr37sBiT 00:18:13.180 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.180 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3126850 00:18:13.180 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.180 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.180 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3126850 /var/tmp/bdevperf.sock 00:18:13.180 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3126850 ']' 00:18:13.180 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.180 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.180 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:18:13.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.180 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.180 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.180 [2024-12-11 14:59:06.202065] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:18:13.180 [2024-12-11 14:59:06.202113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3126850 ] 00:18:13.439 [2024-12-11 14:59:06.276449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.439 [2024-12-11 14:59:06.315751] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.439 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.439 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:13.439 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DDnr37sBiT 00:18:13.697 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:13.956 [2024-12-11 14:59:06.792021] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.956 TLSTESTn1 00:18:13.956 14:59:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:13.956 Running I/O for 10 seconds... 
00:18:16.269 5285.00 IOPS, 20.64 MiB/s [2024-12-11T13:59:10.260Z] 5312.50 IOPS, 20.75 MiB/s [2024-12-11T13:59:11.197Z] 5411.33 IOPS, 21.14 MiB/s [2024-12-11T13:59:12.134Z] 5403.00 IOPS, 21.11 MiB/s [2024-12-11T13:59:13.070Z] 5415.60 IOPS, 21.15 MiB/s [2024-12-11T13:59:14.007Z] 5434.00 IOPS, 21.23 MiB/s [2024-12-11T13:59:15.384Z] 5435.43 IOPS, 21.23 MiB/s [2024-12-11T13:59:16.321Z] 5410.75 IOPS, 21.14 MiB/s [2024-12-11T13:59:17.258Z] 5357.44 IOPS, 20.93 MiB/s [2024-12-11T13:59:17.258Z] 5341.30 IOPS, 20.86 MiB/s 00:18:24.210 Latency(us) 00:18:24.210 [2024-12-11T13:59:17.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.210 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:24.210 Verification LBA range: start 0x0 length 0x2000 00:18:24.210 TLSTESTn1 : 10.02 5345.10 20.88 0.00 0.00 23910.75 5983.72 31685.23 00:18:24.210 [2024-12-11T13:59:17.258Z] =================================================================================================================== 00:18:24.210 [2024-12-11T13:59:17.258Z] Total : 5345.10 20.88 0.00 0.00 23910.75 5983.72 31685.23 00:18:24.210 { 00:18:24.210 "results": [ 00:18:24.210 { 00:18:24.210 "job": "TLSTESTn1", 00:18:24.210 "core_mask": "0x4", 00:18:24.210 "workload": "verify", 00:18:24.210 "status": "finished", 00:18:24.210 "verify_range": { 00:18:24.210 "start": 0, 00:18:24.210 "length": 8192 00:18:24.210 }, 00:18:24.210 "queue_depth": 128, 00:18:24.210 "io_size": 4096, 00:18:24.210 "runtime": 10.01665, 00:18:24.210 "iops": 5345.100407820978, 00:18:24.210 "mibps": 20.879298468050695, 00:18:24.210 "io_failed": 0, 00:18:24.210 "io_timeout": 0, 00:18:24.210 "avg_latency_us": 23910.74642802618, 00:18:24.210 "min_latency_us": 5983.721739130435, 00:18:24.210 "max_latency_us": 31685.231304347824 00:18:24.210 } 00:18:24.210 ], 00:18:24.210 "core_count": 1 00:18:24.210 } 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3126850 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3126850 ']' 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3126850 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3126850 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3126850' 00:18:24.210 killing process with pid 3126850 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3126850 00:18:24.210 Received shutdown signal, test time was about 10.000000 seconds 00:18:24.210 00:18:24.210 Latency(us) 00:18:24.210 [2024-12-11T13:59:17.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.210 [2024-12-11T13:59:17.258Z] 
=================================================================================================================== 00:18:24.210 [2024-12-11T13:59:17.258Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3126850 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GDsoqErvnd 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GDsoqErvnd 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GDsoqErvnd 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GDsoqErvnd 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3128683 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3128683 /var/tmp/bdevperf.sock 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3128683 ']' 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.210 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.469 [2024-12-11 14:59:17.293323] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:18:24.469 [2024-12-11 14:59:17.293371] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128683 ] 00:18:24.469 [2024-12-11 14:59:17.359623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.469 [2024-12-11 14:59:17.396700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.469 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.469 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:24.469 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GDsoqErvnd 00:18:24.736 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:24.998 [2024-12-11 14:59:17.860484] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:24.998 [2024-12-11 14:59:17.865237] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:24.998 [2024-12-11 14:59:17.865864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x794e20 (107): Transport endpoint is not connected 00:18:24.998 [2024-12-11 14:59:17.866858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x794e20 (9): Bad file descriptor 00:18:24.998 [2024-12-11 14:59:17.867858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:24.998 [2024-12-11 14:59:17.867869] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:24.998 [2024-12-11 14:59:17.867879] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:24.998 [2024-12-11 14:59:17.867887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:18:24.998 request: 00:18:24.998 { 00:18:24.998 "name": "TLSTEST", 00:18:24.998 "trtype": "tcp", 00:18:24.998 "traddr": "10.0.0.2", 00:18:24.998 "adrfam": "ipv4", 00:18:24.998 "trsvcid": "4420", 00:18:24.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.998 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.998 "prchk_reftag": false, 00:18:24.998 "prchk_guard": false, 00:18:24.998 "hdgst": false, 00:18:24.998 "ddgst": false, 00:18:24.998 "psk": "key0", 00:18:24.998 "allow_unrecognized_csi": false, 00:18:24.998 "method": "bdev_nvme_attach_controller", 00:18:24.998 "req_id": 1 00:18:24.998 } 00:18:24.998 Got JSON-RPC error response 00:18:24.998 response: 00:18:24.998 { 00:18:24.998 "code": -5, 00:18:24.998 "message": "Input/output error" 00:18:24.998 } 00:18:24.998 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3128683 00:18:24.998 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3128683 ']' 00:18:24.998 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3128683 00:18:24.998 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:24.998 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.998 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3128683 00:18:24.998 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:24.998 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:24.998 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3128683' 00:18:24.998 killing process with pid 3128683 00:18:24.998 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3128683 00:18:24.998 Received shutdown signal, test time was about 10.000000 seconds 00:18:24.998 00:18:24.998 Latency(us) 00:18:24.998 [2024-12-11T13:59:18.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.998 [2024-12-11T13:59:18.046Z] =================================================================================================================== 00:18:24.998 [2024-12-11T13:59:18.046Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:24.998 14:59:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3128683 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.DDnr37sBiT 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.DDnr37sBiT 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.DDnr37sBiT 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DDnr37sBiT 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3128726 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3128726 /var/tmp/bdevperf.sock 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3128726 ']' 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:25.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.257 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.257 [2024-12-11 14:59:18.152588] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:18:25.257 [2024-12-11 14:59:18.152638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128726 ] 00:18:25.257 [2024-12-11 14:59:18.231118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.257 [2024-12-11 14:59:18.270205] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.515 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.515 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:25.515 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DDnr37sBiT 00:18:25.774 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:25.774 [2024-12-11 14:59:18.745871] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:25.774 [2024-12-11 14:59:18.757024] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:25.774 [2024-12-11 14:59:18.757048] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:25.774 [2024-12-11 14:59:18.757072] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:25.774 [2024-12-11 14:59:18.757265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1be20 (107): Transport endpoint is not connected 00:18:25.774 [2024-12-11 14:59:18.758258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d1be20 (9): Bad file descriptor 00:18:25.774 [2024-12-11 14:59:18.759260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:25.774 [2024-12-11 14:59:18.759270] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:25.774 [2024-12-11 14:59:18.759277] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:25.774 [2024-12-11 14:59:18.759285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
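The failure above is the PSK-identity mismatch case: the initiator presents key0 while connecting as nqn.2016-06.io.spdk:host2, the listener finds no PSK for the identity "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1", and the connection is torn down before the controller can initialize; the JSON-RPC request and the -5 (Input/output error) response for this attempt follow below. A minimal reproduction sketch against the bdevperf RPC socket, using only rpc.py calls that appear in this log (the key path is this run's throwaway mktemp file and will differ elsewhere; the script path is shortened here):

  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DDnr37sBiT
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0
  # expected to fail: the target has no PSK registered under the host2 identity for this subsystem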
00:18:25.774 request: 00:18:25.774 { 00:18:25.774 "name": "TLSTEST", 00:18:25.774 "trtype": "tcp", 00:18:25.774 "traddr": "10.0.0.2", 00:18:25.774 "adrfam": "ipv4", 00:18:25.774 "trsvcid": "4420", 00:18:25.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.774 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:25.774 "prchk_reftag": false, 00:18:25.774 "prchk_guard": false, 00:18:25.774 "hdgst": false, 00:18:25.774 "ddgst": false, 00:18:25.774 "psk": "key0", 00:18:25.774 "allow_unrecognized_csi": false, 00:18:25.774 "method": "bdev_nvme_attach_controller", 00:18:25.774 "req_id": 1 00:18:25.774 } 00:18:25.774 Got JSON-RPC error response 00:18:25.774 response: 00:18:25.774 { 00:18:25.774 "code": -5, 00:18:25.774 "message": "Input/output error" 00:18:25.774 } 00:18:25.774 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3128726 00:18:25.774 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3128726 ']' 00:18:25.774 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3128726 00:18:25.774 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:25.774 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.774 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3128726 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3128726' 00:18:26.033 killing process with pid 3128726 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3128726 00:18:26.033 Received shutdown signal, test time was about 10.000000 seconds 00:18:26.033 00:18:26.033 Latency(us) 00:18:26.033 [2024-12-11T13:59:19.081Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.033 [2024-12-11T13:59:19.081Z] =================================================================================================================== 00:18:26.033 [2024-12-11T13:59:19.081Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3128726 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.DDnr37sBiT 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.DDnr37sBiT 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.DDnr37sBiT 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.DDnr37sBiT 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3128934 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3128934 /var/tmp/bdevperf.sock 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3128934 ']' 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:26.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:26.033 14:59:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.033 [2024-12-11 14:59:19.036879] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:18:26.033 [2024-12-11 14:59:19.036927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3128934 ] 00:18:26.291 [2024-12-11 14:59:19.111444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.291 [2024-12-11 14:59:19.148514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.291 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.291 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:26.291 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.DDnr37sBiT 00:18:26.550 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:26.809 [2024-12-11 14:59:19.620660] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:26.809 [2024-12-11 14:59:19.629262] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:26.809 [2024-12-11 14:59:19.629285] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:26.809 [2024-12-11 14:59:19.629308] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:26.809 [2024-12-11 14:59:19.629987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bee20 (107): Transport endpoint is not connected 00:18:26.809 [2024-12-11 14:59:19.630980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15bee20 (9): Bad file descriptor 00:18:26.809 [2024-12-11 14:59:19.631982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:26.809 [2024-12-11 14:59:19.631991] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:26.809 [2024-12-11 14:59:19.631998] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:26.809 [2024-12-11 14:59:19.632006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:18:26.809 request: 00:18:26.809 { 00:18:26.809 "name": "TLSTEST", 00:18:26.809 "trtype": "tcp", 00:18:26.809 "traddr": "10.0.0.2", 00:18:26.809 "adrfam": "ipv4", 00:18:26.809 "trsvcid": "4420", 00:18:26.809 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:26.809 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:26.809 "prchk_reftag": false, 00:18:26.809 "prchk_guard": false, 00:18:26.809 "hdgst": false, 00:18:26.809 "ddgst": false, 00:18:26.809 "psk": "key0", 00:18:26.809 "allow_unrecognized_csi": false, 00:18:26.809 "method": "bdev_nvme_attach_controller", 00:18:26.809 "req_id": 1 00:18:26.809 } 00:18:26.809 Got JSON-RPC error response 00:18:26.809 response: 00:18:26.809 { 00:18:26.809 "code": -5, 00:18:26.809 "message": "Input/output error" 00:18:26.809 } 00:18:26.809 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3128934 00:18:26.809 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3128934 ']' 00:18:26.809 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3128934 00:18:26.809 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:26.809 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:26.809 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3128934 00:18:26.809 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:26.809 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:26.809 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3128934' 00:18:26.809 killing process with pid 3128934 00:18:26.809 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3128934 00:18:26.809 Received shutdown signal, test time was about 10.000000 seconds 00:18:26.809 00:18:26.809 Latency(us) 00:18:26.809 [2024-12-11T13:59:19.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.809 [2024-12-11T13:59:19.857Z] =================================================================================================================== 00:18:26.809 [2024-12-11T13:59:19.857Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:26.809 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3128934 00:18:27.067 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:27.067 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:27.067 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:27.068 
14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3129168 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3129168 /var/tmp/bdevperf.sock 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3129168 ']' 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:27.068 14:59:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.068 [2024-12-11 14:59:19.912698] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:18:27.068 [2024-12-11 14:59:19.912751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129168 ] 00:18:27.068 [2024-12-11 14:59:19.988416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.068 [2024-12-11 14:59:20.029142] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.326 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:27.326 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:27.326 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:27.326 [2024-12-11 14:59:20.300811] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:27.326 [2024-12-11 14:59:20.300843] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:27.326 request: 00:18:27.326 { 00:18:27.326 "name": "key0", 00:18:27.326 "path": "", 00:18:27.326 "method": "keyring_file_add_key", 00:18:27.326 "req_id": 1 00:18:27.326 } 00:18:27.326 Got JSON-RPC error response 00:18:27.327 response: 00:18:27.327 { 00:18:27.327 "code": -1, 00:18:27.327 "message": "Operation not permitted" 00:18:27.327 } 00:18:27.327 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.585 [2024-12-11 14:59:20.493398] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:27.585 [2024-12-11 14:59:20.493431] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:27.585 request: 00:18:27.585 { 00:18:27.585 "name": "TLSTEST", 00:18:27.585 "trtype": "tcp", 00:18:27.585 "traddr": "10.0.0.2", 00:18:27.585 "adrfam": "ipv4", 00:18:27.585 "trsvcid": "4420", 00:18:27.585 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.585 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:27.585 "prchk_reftag": false, 00:18:27.585 "prchk_guard": false, 00:18:27.586 "hdgst": false, 00:18:27.586 "ddgst": false, 00:18:27.586 "psk": "key0", 00:18:27.586 "allow_unrecognized_csi": false, 00:18:27.586 "method": "bdev_nvme_attach_controller", 00:18:27.586 "req_id": 1 00:18:27.586 } 00:18:27.586 Got JSON-RPC error response 00:18:27.586 response: 00:18:27.586 { 00:18:27.586 "code": -126, 00:18:27.586 "message": "Required key not available" 00:18:27.586 } 00:18:27.586 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3129168 00:18:27.586 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3129168 ']' 00:18:27.586 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3129168 00:18:27.586 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.586 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.586 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3129168 00:18:27.586 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:27.586 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:27.586 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129168' 00:18:27.586 killing process with pid 3129168 00:18:27.586 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3129168 00:18:27.586 Received shutdown signal, test time was about 10.000000 seconds 00:18:27.586 00:18:27.586 Latency(us) 00:18:27.586 [2024-12-11T13:59:20.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.586 [2024-12-11T13:59:20.634Z] =================================================================================================================== 00:18:27.586 [2024-12-11T13:59:20.634Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:27.586 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3129168 00:18:27.845 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:27.845 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:27.845 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:27.846 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:27.846 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:27.846 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3124500 00:18:27.846 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3124500 ']' 00:18:27.846 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3124500 00:18:27.846 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:27.846 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:27.846 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3124500 00:18:27.846 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:27.846 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:27.846 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3124500' 00:18:27.846 killing process with pid 3124500 00:18:27.846 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3124500 00:18:27.846 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3124500 00:18:28.105 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:28.105 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:28.105 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:28.105 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:28.105 14:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:28.105 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:28.105 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:28.105 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:28.105 14:59:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:28.105 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.54Dc5b2lHL 00:18:28.105 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:28.105 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.54Dc5b2lHL 00:18:28.105 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:28.105 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.105 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.105 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.105 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3129279 00:18:28.105 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:28.105 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3129279 00:18:28.105 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3129279 ']' 00:18:28.105 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.105 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.105 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.105 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.105 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.105 [2024-12-11 14:59:21.055523] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:18:28.105 [2024-12-11 14:59:21.055570] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.105 [2024-12-11 14:59:21.129931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.364 [2024-12-11 14:59:21.168934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.364 [2024-12-11 14:59:21.168965] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:28.364 [2024-12-11 14:59:21.168972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.364 [2024-12-11 14:59:21.168979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.364 [2024-12-11 14:59:21.168984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:28.364 [2024-12-11 14:59:21.169532] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.364 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.364 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:28.364 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:28.364 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.364 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.364 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.364 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.54Dc5b2lHL 00:18:28.364 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.54Dc5b2lHL 00:18:28.364 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:28.623 [2024-12-11 14:59:21.478153] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.623 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:28.883 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:28.883 [2024-12-11 14:59:21.867145] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:28.883 [2024-12-11 14:59:21.867377] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.883 14:59:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:29.142 malloc0 00:18:29.142 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:29.401 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.54Dc5b2lHL 00:18:29.659 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:29.659 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.54Dc5b2lHL 00:18:29.659 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- 
# local subnqn hostnqn psk 00:18:29.659 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:29.659 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:29.659 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.54Dc5b2lHL 00:18:29.659 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:29.918 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:29.918 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3129668 00:18:29.918 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:29.918 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3129668 /var/tmp/bdevperf.sock 00:18:29.918 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3129668 ']' 00:18:29.918 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.918 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.918 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:29.918 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.918 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.918 [2024-12-11 14:59:22.751973] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:18:29.918 [2024-12-11 14:59:22.752021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3129668 ] 00:18:29.918 [2024-12-11 14:59:22.824542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.918 [2024-12-11 14:59:22.864332] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.918 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.918 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.918 14:59:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.54Dc5b2lHL 00:18:30.177 14:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.436 [2024-12-11 14:59:23.328638] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:30.436 TLSTESTn1 00:18:30.436 14:59:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:30.694 Running I/O for 10 seconds... 00:18:32.566 5278.00 IOPS, 20.62 MiB/s [2024-12-11T13:59:26.551Z] 5379.50 IOPS, 21.01 MiB/s [2024-12-11T13:59:27.927Z] 5366.00 IOPS, 20.96 MiB/s [2024-12-11T13:59:28.861Z] 5409.50 IOPS, 21.13 MiB/s [2024-12-11T13:59:29.797Z] 5409.60 IOPS, 21.13 MiB/s [2024-12-11T13:59:30.733Z] 5324.17 IOPS, 20.80 MiB/s [2024-12-11T13:59:31.667Z] 5266.29 IOPS, 20.57 MiB/s [2024-12-11T13:59:32.602Z] 5247.25 IOPS, 20.50 MiB/s [2024-12-11T13:59:33.978Z] 5215.22 IOPS, 20.37 MiB/s [2024-12-11T13:59:33.978Z] 5203.90 IOPS, 20.33 MiB/s 00:18:40.930 Latency(us) 00:18:40.930 [2024-12-11T13:59:33.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.930 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:40.930 Verification LBA range: start 0x0 length 0x2000 00:18:40.930 TLSTESTn1 : 10.02 5207.81 20.34 0.00 0.00 24541.81 6382.64 50377.24 00:18:40.930 [2024-12-11T13:59:33.978Z] =================================================================================================================== 00:18:40.930 [2024-12-11T13:59:33.978Z] Total : 5207.81 20.34 0.00 0.00 24541.81 6382.64 50377.24 00:18:40.930 { 00:18:40.930 "results": [ 00:18:40.930 { 00:18:40.930 "job": "TLSTESTn1", 00:18:40.930 "core_mask": "0x4", 00:18:40.930 "workload": "verify", 00:18:40.930 "status": "finished", 00:18:40.930 "verify_range": { 00:18:40.930 "start": 0, 00:18:40.930 "length": 8192 00:18:40.930 }, 00:18:40.930 "queue_depth": 128, 00:18:40.930 "io_size": 4096, 00:18:40.930 "runtime": 10.016882, 00:18:40.930 "iops": 5207.808178233507, 00:18:40.930 "mibps": 20.343000696224635, 00:18:40.930 "io_failed": 0, 00:18:40.930 "io_timeout": 0, 00:18:40.930 "avg_latency_us": 24541.813695077086, 00:18:40.930 "min_latency_us": 6382.63652173913, 00:18:40.930 "max_latency_us": 50377.23826086956 00:18:40.930 } 00:18:40.930 ], 00:18:40.930 
"core_count": 1 00:18:40.930 } 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3129668 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3129668 ']' 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3129668 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129668 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129668' 00:18:40.930 killing process with pid 3129668 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3129668 00:18:40.930 Received shutdown signal, test time was about 10.000000 seconds 00:18:40.930 00:18:40.930 Latency(us) 00:18:40.930 [2024-12-11T13:59:33.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.930 [2024-12-11T13:59:33.978Z] =================================================================================================================== 00:18:40.930 [2024-12-11T13:59:33.978Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3129668 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.54Dc5b2lHL 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.54Dc5b2lHL 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.54Dc5b2lHL 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.54Dc5b2lHL 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.54Dc5b2lHL 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3131374 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3131374 /var/tmp/bdevperf.sock 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3131374 ']' 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:40.930 14:59:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.930 [2024-12-11 14:59:33.845112] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:18:40.930 [2024-12-11 14:59:33.845167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3131374 ] 00:18:40.930 [2024-12-11 14:59:33.923063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.930 [2024-12-11 14:59:33.961524] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:41.189 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:41.189 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:41.189 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.54Dc5b2lHL 00:18:41.189 [2024-12-11 14:59:34.221249] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.54Dc5b2lHL': 0100666 00:18:41.189 [2024-12-11 14:59:34.221282] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:41.189 request: 00:18:41.189 { 00:18:41.189 "name": "key0", 00:18:41.189 "path": "/tmp/tmp.54Dc5b2lHL", 00:18:41.189 "method": "keyring_file_add_key", 00:18:41.189 "req_id": 1 00:18:41.189 } 00:18:41.189 Got JSON-RPC error response 00:18:41.189 response: 00:18:41.189 { 00:18:41.189 "code": -1, 00:18:41.189 "message": "Operation not permitted" 00:18:41.189 } 00:18:41.447 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:41.448 [2024-12-11 14:59:34.417855] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:41.448 [2024-12-11 14:59:34.417899] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:41.448 request: 00:18:41.448 { 00:18:41.448 "name": "TLSTEST", 00:18:41.448 "trtype": "tcp", 00:18:41.448 "traddr": "10.0.0.2", 00:18:41.448 "adrfam": "ipv4", 00:18:41.448 "trsvcid": "4420", 00:18:41.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:41.448 "prchk_reftag": false, 00:18:41.448 "prchk_guard": false, 00:18:41.448 "hdgst": false, 00:18:41.448 "ddgst": false, 00:18:41.448 "psk": "key0", 00:18:41.448 "allow_unrecognized_csi": false, 00:18:41.448 "method": "bdev_nvme_attach_controller", 00:18:41.448 "req_id": 1 00:18:41.448 } 00:18:41.448 Got JSON-RPC error response 00:18:41.448 response: 00:18:41.448 { 00:18:41.448 "code": -126, 00:18:41.448 "message": "Required key not available" 00:18:41.448 } 00:18:41.448 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3131374 00:18:41.448 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3131374 ']' 00:18:41.448 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3131374 00:18:41.448 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:41.448 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.448 14:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3131374 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3131374' 00:18:41.706 killing process with pid 3131374 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3131374 00:18:41.706 Received shutdown signal, test time was about 10.000000 seconds 00:18:41.706 00:18:41.706 Latency(us) 00:18:41.706 [2024-12-11T13:59:34.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.706 [2024-12-11T13:59:34.754Z] =================================================================================================================== 00:18:41.706 [2024-12-11T13:59:34.754Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3131374 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3129279 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3129279 ']' 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3129279 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129279 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129279' 00:18:41.706 killing process with pid 3129279 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3129279 00:18:41.706 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3129279 00:18:41.965 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:41.965 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:41.965 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.965 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.965 14:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3131534 00:18:41.965 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3131534 00:18:41.965 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:41.965 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3131534 ']' 00:18:41.965 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.965 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.965 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.965 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.965 14:59:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.965 [2024-12-11 14:59:34.926882] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:18:41.965 [2024-12-11 14:59:34.926930] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.965 [2024-12-11 14:59:35.005595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.223 [2024-12-11 14:59:35.046031] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.223 [2024-12-11 14:59:35.046069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.223 [2024-12-11 14:59:35.046077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.223 [2024-12-11 14:59:35.046083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.223 [2024-12-11 14:59:35.046089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
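With the world-readable key file still in place, the script now brings up a fresh target (nvmfpid=3131534) and retries the target-side setup, which is expected to fail at the keyring step; the sequence it drives, condensed from the rpc.py calls recorded in this log (the target RPC socket defaults to /var/tmp/spdk.sock, the key path is this run's mktemp file, and the script path is shortened), looks like this sketch:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.54Dc5b2lHL        # rejected below: the file mode is now 0666
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0   # then fails: key0 does not exist

The retry and its two error responses appear in the log lines that follow.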
00:18:42.223 [2024-12-11 14:59:35.046654] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.223 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.223 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:42.223 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:42.223 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:42.223 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.223 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.223 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.54Dc5b2lHL 00:18:42.223 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:42.223 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.54Dc5b2lHL 00:18:42.223 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:42.223 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.223 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:42.223 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:42.223 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.54Dc5b2lHL 00:18:42.223 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.54Dc5b2lHL 00:18:42.223 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:42.481 [2024-12-11 14:59:35.354616] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.481 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:42.739 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:42.739 [2024-12-11 14:59:35.735606] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:42.739 [2024-12-11 14:59:35.735832] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.739 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:42.998 malloc0 00:18:42.998 14:59:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:43.256 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.54Dc5b2lHL 00:18:43.515 
[2024-12-11 14:59:36.305140] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.54Dc5b2lHL': 0100666 00:18:43.515 [2024-12-11 14:59:36.305171] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:43.515 request: 00:18:43.515 { 00:18:43.515 "name": "key0", 00:18:43.515 "path": "/tmp/tmp.54Dc5b2lHL", 00:18:43.515 "method": "keyring_file_add_key", 00:18:43.515 "req_id": 1 00:18:43.515 } 00:18:43.515 Got JSON-RPC error response 00:18:43.515 response: 00:18:43.515 { 00:18:43.515 "code": -1, 00:18:43.515 "message": "Operation not permitted" 00:18:43.515 } 00:18:43.515 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:43.515 [2024-12-11 14:59:36.497660] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:43.515 [2024-12-11 14:59:36.497697] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:43.515 request: 00:18:43.515 { 00:18:43.515 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.515 "host": "nqn.2016-06.io.spdk:host1", 00:18:43.515 "psk": "key0", 00:18:43.515 "method": "nvmf_subsystem_add_host", 00:18:43.515 "req_id": 1 00:18:43.515 } 00:18:43.515 Got JSON-RPC error response 00:18:43.515 response: 00:18:43.515 { 00:18:43.515 "code": -32603, 00:18:43.515 "message": "Internal error" 00:18:43.515 } 00:18:43.515 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:43.515 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:43.515 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:43.515 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:43.515 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3131534 00:18:43.515 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3131534 ']' 00:18:43.515 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3131534 00:18:43.515 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:43.515 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.515 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3131534 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3131534' 00:18:43.774 killing process with pid 3131534 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3131534 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3131534 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.54Dc5b2lHL 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:43.774 14:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3131951 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3131951 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3131951 ']' 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:43.774 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.774 [2024-12-11 14:59:36.783606] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:18:43.774 [2024-12-11 14:59:36.783655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.033 [2024-12-11 14:59:36.845483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.033 [2024-12-11 14:59:36.885984] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.033 [2024-12-11 14:59:36.886016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.033 [2024-12-11 14:59:36.886023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.033 [2024-12-11 14:59:36.886029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.033 [2024-12-11 14:59:36.886035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
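The failed keyring_file_add_key above is the intended negative case for target/tls.sh@178: the file-based keyring rejects the PSK file because its mode (0100666) leaves it readable by group/other, and the follow-on nvmf_subsystem_add_host --psk key0 then fails with "Key 'key0' does not exist". After the chmod 0600 at tls.sh@182 the same setup succeeds. The working sequence, reconstructed only from the RPCs already visible in this log (same key path, NQNs and listener address; rpc.py path abbreviated; a sketch, not part of the test scripts):

  chmod 0600 /tmp/tmp.54Dc5b2lHL                                   # keyring requires a private key file
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: secure (TLS) channel
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.54Dc5b2lHL     # now accepted by the keyring
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0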
00:18:44.033 [2024-12-11 14:59:36.886595] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.033 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:44.033 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:44.033 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:44.033 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:44.033 14:59:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.033 14:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.033 14:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.54Dc5b2lHL 00:18:44.033 14:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.54Dc5b2lHL 00:18:44.033 14:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:44.291 [2024-12-11 14:59:37.182907] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.291 14:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:44.550 14:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:44.550 [2024-12-11 14:59:37.575922] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:44.550 [2024-12-11 14:59:37.576135] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.808 14:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:44.808 malloc0 00:18:44.808 14:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:45.067 14:59:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.54Dc5b2lHL 00:18:45.325 14:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:45.584 14:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:45.584 14:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3132268 00:18:45.584 14:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:45.584 14:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3132268 /var/tmp/bdevperf.sock 00:18:45.584 14:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3132268 ']' 00:18:45.584 14:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:45.584 14:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.584 14:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:45.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:45.584 14:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.584 14:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:45.584 [2024-12-11 14:59:38.410225] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:18:45.584 [2024-12-11 14:59:38.410277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132268 ] 00:18:45.584 [2024-12-11 14:59:38.486062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.584 [2024-12-11 14:59:38.525777] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.584 14:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.584 14:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:45.584 14:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.54Dc5b2lHL 00:18:45.842 14:59:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:46.101 [2024-12-11 14:59:38.973299] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:46.101 TLSTESTn1 00:18:46.101 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py save_config 00:18:46.360 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:46.360 "subsystems": [ 00:18:46.360 { 00:18:46.360 "subsystem": "keyring", 00:18:46.360 "config": [ 00:18:46.360 { 00:18:46.360 "method": "keyring_file_add_key", 00:18:46.360 "params": { 00:18:46.360 "name": "key0", 00:18:46.360 "path": "/tmp/tmp.54Dc5b2lHL" 00:18:46.360 } 00:18:46.360 } 00:18:46.360 ] 00:18:46.360 }, 00:18:46.360 { 00:18:46.360 "subsystem": "iobuf", 00:18:46.360 "config": [ 00:18:46.360 { 00:18:46.360 "method": "iobuf_set_options", 00:18:46.360 "params": { 00:18:46.360 "small_pool_count": 8192, 00:18:46.360 "large_pool_count": 1024, 00:18:46.360 "small_bufsize": 8192, 00:18:46.360 "large_bufsize": 135168, 00:18:46.360 "enable_numa": false 00:18:46.360 } 00:18:46.360 } 00:18:46.360 ] 00:18:46.360 }, 00:18:46.360 { 00:18:46.360 "subsystem": "sock", 00:18:46.360 "config": [ 00:18:46.360 { 00:18:46.360 "method": "sock_set_default_impl", 00:18:46.360 "params": { 00:18:46.360 "impl_name": 
"posix" 00:18:46.360 } 00:18:46.360 }, 00:18:46.360 { 00:18:46.360 "method": "sock_impl_set_options", 00:18:46.360 "params": { 00:18:46.360 "impl_name": "ssl", 00:18:46.360 "recv_buf_size": 4096, 00:18:46.360 "send_buf_size": 4096, 00:18:46.360 "enable_recv_pipe": true, 00:18:46.360 "enable_quickack": false, 00:18:46.360 "enable_placement_id": 0, 00:18:46.360 "enable_zerocopy_send_server": true, 00:18:46.360 "enable_zerocopy_send_client": false, 00:18:46.360 "zerocopy_threshold": 0, 00:18:46.360 "tls_version": 0, 00:18:46.360 "enable_ktls": false 00:18:46.360 } 00:18:46.360 }, 00:18:46.360 { 00:18:46.360 "method": "sock_impl_set_options", 00:18:46.360 "params": { 00:18:46.360 "impl_name": "posix", 00:18:46.360 "recv_buf_size": 2097152, 00:18:46.360 "send_buf_size": 2097152, 00:18:46.360 "enable_recv_pipe": true, 00:18:46.360 "enable_quickack": false, 00:18:46.360 "enable_placement_id": 0, 00:18:46.360 "enable_zerocopy_send_server": true, 00:18:46.360 "enable_zerocopy_send_client": false, 00:18:46.360 "zerocopy_threshold": 0, 00:18:46.360 "tls_version": 0, 00:18:46.360 "enable_ktls": false 00:18:46.360 } 00:18:46.360 } 00:18:46.360 ] 00:18:46.360 }, 00:18:46.360 { 00:18:46.360 "subsystem": "vmd", 00:18:46.360 "config": [] 00:18:46.360 }, 00:18:46.360 { 00:18:46.360 "subsystem": "accel", 00:18:46.360 "config": [ 00:18:46.360 { 00:18:46.360 "method": "accel_set_options", 00:18:46.360 "params": { 00:18:46.360 "small_cache_size": 128, 00:18:46.360 "large_cache_size": 16, 00:18:46.360 "task_count": 2048, 00:18:46.360 "sequence_count": 2048, 00:18:46.360 "buf_count": 2048 00:18:46.360 } 00:18:46.360 } 00:18:46.360 ] 00:18:46.360 }, 00:18:46.360 { 00:18:46.360 "subsystem": "bdev", 00:18:46.361 "config": [ 00:18:46.361 { 00:18:46.361 "method": "bdev_set_options", 00:18:46.361 "params": { 00:18:46.361 "bdev_io_pool_size": 65535, 00:18:46.361 "bdev_io_cache_size": 256, 00:18:46.361 "bdev_auto_examine": true, 00:18:46.361 "iobuf_small_cache_size": 128, 00:18:46.361 "iobuf_large_cache_size": 16 00:18:46.361 } 00:18:46.361 }, 00:18:46.361 { 00:18:46.361 "method": "bdev_raid_set_options", 00:18:46.361 "params": { 00:18:46.361 "process_window_size_kb": 1024, 00:18:46.361 "process_max_bandwidth_mb_sec": 0 00:18:46.361 } 00:18:46.361 }, 00:18:46.361 { 00:18:46.361 "method": "bdev_iscsi_set_options", 00:18:46.361 "params": { 00:18:46.361 "timeout_sec": 30 00:18:46.361 } 00:18:46.361 }, 00:18:46.361 { 00:18:46.361 "method": "bdev_nvme_set_options", 00:18:46.361 "params": { 00:18:46.361 "action_on_timeout": "none", 00:18:46.361 "timeout_us": 0, 00:18:46.361 "timeout_admin_us": 0, 00:18:46.361 "keep_alive_timeout_ms": 10000, 00:18:46.361 "arbitration_burst": 0, 00:18:46.361 "low_priority_weight": 0, 00:18:46.361 "medium_priority_weight": 0, 00:18:46.361 "high_priority_weight": 0, 00:18:46.361 "nvme_adminq_poll_period_us": 10000, 00:18:46.361 "nvme_ioq_poll_period_us": 0, 00:18:46.361 "io_queue_requests": 0, 00:18:46.361 "delay_cmd_submit": true, 00:18:46.361 "transport_retry_count": 4, 00:18:46.361 "bdev_retry_count": 3, 00:18:46.361 "transport_ack_timeout": 0, 00:18:46.361 "ctrlr_loss_timeout_sec": 0, 00:18:46.361 "reconnect_delay_sec": 0, 00:18:46.361 "fast_io_fail_timeout_sec": 0, 00:18:46.361 "disable_auto_failback": false, 00:18:46.361 "generate_uuids": false, 00:18:46.361 "transport_tos": 0, 00:18:46.361 "nvme_error_stat": false, 00:18:46.361 "rdma_srq_size": 0, 00:18:46.361 "io_path_stat": false, 00:18:46.361 "allow_accel_sequence": false, 00:18:46.361 "rdma_max_cq_size": 0, 00:18:46.361 
"rdma_cm_event_timeout_ms": 0, 00:18:46.361 "dhchap_digests": [ 00:18:46.361 "sha256", 00:18:46.361 "sha384", 00:18:46.361 "sha512" 00:18:46.361 ], 00:18:46.361 "dhchap_dhgroups": [ 00:18:46.361 "null", 00:18:46.361 "ffdhe2048", 00:18:46.361 "ffdhe3072", 00:18:46.361 "ffdhe4096", 00:18:46.361 "ffdhe6144", 00:18:46.361 "ffdhe8192" 00:18:46.361 ], 00:18:46.361 "rdma_umr_per_io": false 00:18:46.361 } 00:18:46.361 }, 00:18:46.361 { 00:18:46.361 "method": "bdev_nvme_set_hotplug", 00:18:46.361 "params": { 00:18:46.361 "period_us": 100000, 00:18:46.361 "enable": false 00:18:46.361 } 00:18:46.361 }, 00:18:46.361 { 00:18:46.361 "method": "bdev_malloc_create", 00:18:46.361 "params": { 00:18:46.361 "name": "malloc0", 00:18:46.361 "num_blocks": 8192, 00:18:46.361 "block_size": 4096, 00:18:46.361 "physical_block_size": 4096, 00:18:46.361 "uuid": "24644beb-6143-4078-ad2f-4d606c4cbe9c", 00:18:46.361 "optimal_io_boundary": 0, 00:18:46.361 "md_size": 0, 00:18:46.361 "dif_type": 0, 00:18:46.361 "dif_is_head_of_md": false, 00:18:46.361 "dif_pi_format": 0 00:18:46.361 } 00:18:46.361 }, 00:18:46.361 { 00:18:46.361 "method": "bdev_wait_for_examine" 00:18:46.361 } 00:18:46.361 ] 00:18:46.361 }, 00:18:46.361 { 00:18:46.361 "subsystem": "nbd", 00:18:46.361 "config": [] 00:18:46.361 }, 00:18:46.361 { 00:18:46.361 "subsystem": "scheduler", 00:18:46.361 "config": [ 00:18:46.361 { 00:18:46.361 "method": "framework_set_scheduler", 00:18:46.361 "params": { 00:18:46.361 "name": "static" 00:18:46.361 } 00:18:46.361 } 00:18:46.361 ] 00:18:46.361 }, 00:18:46.361 { 00:18:46.361 "subsystem": "nvmf", 00:18:46.361 "config": [ 00:18:46.361 { 00:18:46.361 "method": "nvmf_set_config", 00:18:46.361 "params": { 00:18:46.361 "discovery_filter": "match_any", 00:18:46.361 "admin_cmd_passthru": { 00:18:46.361 "identify_ctrlr": false 00:18:46.361 }, 00:18:46.361 "dhchap_digests": [ 00:18:46.361 "sha256", 00:18:46.361 "sha384", 00:18:46.361 "sha512" 00:18:46.361 ], 00:18:46.361 "dhchap_dhgroups": [ 00:18:46.361 "null", 00:18:46.361 "ffdhe2048", 00:18:46.361 "ffdhe3072", 00:18:46.361 "ffdhe4096", 00:18:46.361 "ffdhe6144", 00:18:46.361 "ffdhe8192" 00:18:46.361 ] 00:18:46.361 } 00:18:46.361 }, 00:18:46.361 { 00:18:46.361 "method": "nvmf_set_max_subsystems", 00:18:46.361 "params": { 00:18:46.361 "max_subsystems": 1024 00:18:46.361 } 00:18:46.361 }, 00:18:46.361 { 00:18:46.361 "method": "nvmf_set_crdt", 00:18:46.361 "params": { 00:18:46.361 "crdt1": 0, 00:18:46.361 "crdt2": 0, 00:18:46.361 "crdt3": 0 00:18:46.361 } 00:18:46.361 }, 00:18:46.361 { 00:18:46.361 "method": "nvmf_create_transport", 00:18:46.361 "params": { 00:18:46.361 "trtype": "TCP", 00:18:46.361 "max_queue_depth": 128, 00:18:46.361 "max_io_qpairs_per_ctrlr": 127, 00:18:46.361 "in_capsule_data_size": 4096, 00:18:46.361 "max_io_size": 131072, 00:18:46.361 "io_unit_size": 131072, 00:18:46.361 "max_aq_depth": 128, 00:18:46.361 "num_shared_buffers": 511, 00:18:46.361 "buf_cache_size": 4294967295, 00:18:46.361 "dif_insert_or_strip": false, 00:18:46.361 "zcopy": false, 00:18:46.361 "c2h_success": false, 00:18:46.361 "sock_priority": 0, 00:18:46.361 "abort_timeout_sec": 1, 00:18:46.361 "ack_timeout": 0, 00:18:46.361 "data_wr_pool_size": 0 00:18:46.361 } 00:18:46.361 }, 00:18:46.361 { 00:18:46.361 "method": "nvmf_create_subsystem", 00:18:46.361 "params": { 00:18:46.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.361 "allow_any_host": false, 00:18:46.361 "serial_number": "SPDK00000000000001", 00:18:46.361 "model_number": "SPDK bdev Controller", 00:18:46.361 "max_namespaces": 10, 
00:18:46.361 "min_cntlid": 1, 00:18:46.361 "max_cntlid": 65519, 00:18:46.361 "ana_reporting": false 00:18:46.361 } 00:18:46.361 }, 00:18:46.361 { 00:18:46.361 "method": "nvmf_subsystem_add_host", 00:18:46.361 "params": { 00:18:46.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.361 "host": "nqn.2016-06.io.spdk:host1", 00:18:46.361 "psk": "key0" 00:18:46.361 } 00:18:46.361 }, 00:18:46.361 { 00:18:46.361 "method": "nvmf_subsystem_add_ns", 00:18:46.361 "params": { 00:18:46.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.361 "namespace": { 00:18:46.361 "nsid": 1, 00:18:46.361 "bdev_name": "malloc0", 00:18:46.361 "nguid": "24644BEB61434078AD2F4D606C4CBE9C", 00:18:46.361 "uuid": "24644beb-6143-4078-ad2f-4d606c4cbe9c", 00:18:46.361 "no_auto_visible": false 00:18:46.361 } 00:18:46.361 } 00:18:46.361 }, 00:18:46.361 { 00:18:46.361 "method": "nvmf_subsystem_add_listener", 00:18:46.361 "params": { 00:18:46.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.361 "listen_address": { 00:18:46.361 "trtype": "TCP", 00:18:46.361 "adrfam": "IPv4", 00:18:46.361 "traddr": "10.0.0.2", 00:18:46.361 "trsvcid": "4420" 00:18:46.361 }, 00:18:46.361 "secure_channel": true 00:18:46.361 } 00:18:46.361 } 00:18:46.361 ] 00:18:46.361 } 00:18:46.361 ] 00:18:46.361 }' 00:18:46.361 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:46.621 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:46.621 "subsystems": [ 00:18:46.621 { 00:18:46.621 "subsystem": "keyring", 00:18:46.621 "config": [ 00:18:46.621 { 00:18:46.621 "method": "keyring_file_add_key", 00:18:46.621 "params": { 00:18:46.621 "name": "key0", 00:18:46.621 "path": "/tmp/tmp.54Dc5b2lHL" 00:18:46.621 } 00:18:46.621 } 00:18:46.621 ] 00:18:46.621 }, 00:18:46.621 { 00:18:46.621 "subsystem": "iobuf", 00:18:46.621 "config": [ 00:18:46.621 { 00:18:46.621 "method": "iobuf_set_options", 00:18:46.621 "params": { 00:18:46.621 "small_pool_count": 8192, 00:18:46.621 "large_pool_count": 1024, 00:18:46.621 "small_bufsize": 8192, 00:18:46.621 "large_bufsize": 135168, 00:18:46.621 "enable_numa": false 00:18:46.621 } 00:18:46.621 } 00:18:46.621 ] 00:18:46.621 }, 00:18:46.621 { 00:18:46.621 "subsystem": "sock", 00:18:46.621 "config": [ 00:18:46.621 { 00:18:46.621 "method": "sock_set_default_impl", 00:18:46.621 "params": { 00:18:46.621 "impl_name": "posix" 00:18:46.621 } 00:18:46.621 }, 00:18:46.621 { 00:18:46.621 "method": "sock_impl_set_options", 00:18:46.621 "params": { 00:18:46.621 "impl_name": "ssl", 00:18:46.621 "recv_buf_size": 4096, 00:18:46.621 "send_buf_size": 4096, 00:18:46.621 "enable_recv_pipe": true, 00:18:46.621 "enable_quickack": false, 00:18:46.621 "enable_placement_id": 0, 00:18:46.621 "enable_zerocopy_send_server": true, 00:18:46.621 "enable_zerocopy_send_client": false, 00:18:46.621 "zerocopy_threshold": 0, 00:18:46.621 "tls_version": 0, 00:18:46.621 "enable_ktls": false 00:18:46.621 } 00:18:46.621 }, 00:18:46.621 { 00:18:46.621 "method": "sock_impl_set_options", 00:18:46.621 "params": { 00:18:46.621 "impl_name": "posix", 00:18:46.621 "recv_buf_size": 2097152, 00:18:46.621 "send_buf_size": 2097152, 00:18:46.621 "enable_recv_pipe": true, 00:18:46.621 "enable_quickack": false, 00:18:46.621 "enable_placement_id": 0, 00:18:46.621 "enable_zerocopy_send_server": true, 00:18:46.621 "enable_zerocopy_send_client": false, 00:18:46.621 "zerocopy_threshold": 0, 00:18:46.621 "tls_version": 0, 00:18:46.621 
"enable_ktls": false 00:18:46.621 } 00:18:46.621 } 00:18:46.621 ] 00:18:46.621 }, 00:18:46.621 { 00:18:46.621 "subsystem": "vmd", 00:18:46.621 "config": [] 00:18:46.621 }, 00:18:46.621 { 00:18:46.621 "subsystem": "accel", 00:18:46.621 "config": [ 00:18:46.621 { 00:18:46.621 "method": "accel_set_options", 00:18:46.621 "params": { 00:18:46.621 "small_cache_size": 128, 00:18:46.621 "large_cache_size": 16, 00:18:46.621 "task_count": 2048, 00:18:46.621 "sequence_count": 2048, 00:18:46.621 "buf_count": 2048 00:18:46.621 } 00:18:46.621 } 00:18:46.621 ] 00:18:46.621 }, 00:18:46.621 { 00:18:46.621 "subsystem": "bdev", 00:18:46.621 "config": [ 00:18:46.621 { 00:18:46.621 "method": "bdev_set_options", 00:18:46.621 "params": { 00:18:46.621 "bdev_io_pool_size": 65535, 00:18:46.621 "bdev_io_cache_size": 256, 00:18:46.621 "bdev_auto_examine": true, 00:18:46.621 "iobuf_small_cache_size": 128, 00:18:46.621 "iobuf_large_cache_size": 16 00:18:46.621 } 00:18:46.621 }, 00:18:46.621 { 00:18:46.621 "method": "bdev_raid_set_options", 00:18:46.621 "params": { 00:18:46.621 "process_window_size_kb": 1024, 00:18:46.621 "process_max_bandwidth_mb_sec": 0 00:18:46.621 } 00:18:46.621 }, 00:18:46.621 { 00:18:46.621 "method": "bdev_iscsi_set_options", 00:18:46.621 "params": { 00:18:46.621 "timeout_sec": 30 00:18:46.621 } 00:18:46.621 }, 00:18:46.621 { 00:18:46.621 "method": "bdev_nvme_set_options", 00:18:46.621 "params": { 00:18:46.621 "action_on_timeout": "none", 00:18:46.621 "timeout_us": 0, 00:18:46.621 "timeout_admin_us": 0, 00:18:46.621 "keep_alive_timeout_ms": 10000, 00:18:46.621 "arbitration_burst": 0, 00:18:46.621 "low_priority_weight": 0, 00:18:46.621 "medium_priority_weight": 0, 00:18:46.621 "high_priority_weight": 0, 00:18:46.621 "nvme_adminq_poll_period_us": 10000, 00:18:46.621 "nvme_ioq_poll_period_us": 0, 00:18:46.621 "io_queue_requests": 512, 00:18:46.621 "delay_cmd_submit": true, 00:18:46.621 "transport_retry_count": 4, 00:18:46.621 "bdev_retry_count": 3, 00:18:46.621 "transport_ack_timeout": 0, 00:18:46.621 "ctrlr_loss_timeout_sec": 0, 00:18:46.621 "reconnect_delay_sec": 0, 00:18:46.621 "fast_io_fail_timeout_sec": 0, 00:18:46.621 "disable_auto_failback": false, 00:18:46.621 "generate_uuids": false, 00:18:46.621 "transport_tos": 0, 00:18:46.621 "nvme_error_stat": false, 00:18:46.621 "rdma_srq_size": 0, 00:18:46.621 "io_path_stat": false, 00:18:46.621 "allow_accel_sequence": false, 00:18:46.621 "rdma_max_cq_size": 0, 00:18:46.621 "rdma_cm_event_timeout_ms": 0, 00:18:46.621 "dhchap_digests": [ 00:18:46.621 "sha256", 00:18:46.621 "sha384", 00:18:46.621 "sha512" 00:18:46.621 ], 00:18:46.621 "dhchap_dhgroups": [ 00:18:46.621 "null", 00:18:46.621 "ffdhe2048", 00:18:46.621 "ffdhe3072", 00:18:46.621 "ffdhe4096", 00:18:46.621 "ffdhe6144", 00:18:46.621 "ffdhe8192" 00:18:46.621 ], 00:18:46.621 "rdma_umr_per_io": false 00:18:46.621 } 00:18:46.621 }, 00:18:46.621 { 00:18:46.621 "method": "bdev_nvme_attach_controller", 00:18:46.621 "params": { 00:18:46.621 "name": "TLSTEST", 00:18:46.621 "trtype": "TCP", 00:18:46.621 "adrfam": "IPv4", 00:18:46.621 "traddr": "10.0.0.2", 00:18:46.621 "trsvcid": "4420", 00:18:46.621 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.621 "prchk_reftag": false, 00:18:46.621 "prchk_guard": false, 00:18:46.621 "ctrlr_loss_timeout_sec": 0, 00:18:46.621 "reconnect_delay_sec": 0, 00:18:46.621 "fast_io_fail_timeout_sec": 0, 00:18:46.621 "psk": "key0", 00:18:46.621 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.621 "hdgst": false, 00:18:46.621 "ddgst": false, 00:18:46.621 "multipath": "multipath" 
00:18:46.621 } 00:18:46.621 }, 00:18:46.621 { 00:18:46.621 "method": "bdev_nvme_set_hotplug", 00:18:46.621 "params": { 00:18:46.621 "period_us": 100000, 00:18:46.621 "enable": false 00:18:46.621 } 00:18:46.621 }, 00:18:46.621 { 00:18:46.621 "method": "bdev_wait_for_examine" 00:18:46.621 } 00:18:46.621 ] 00:18:46.621 }, 00:18:46.621 { 00:18:46.621 "subsystem": "nbd", 00:18:46.621 "config": [] 00:18:46.621 } 00:18:46.621 ] 00:18:46.621 }' 00:18:46.621 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3132268 00:18:46.621 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3132268 ']' 00:18:46.621 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3132268 00:18:46.621 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:46.621 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.621 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132268 00:18:46.621 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:46.621 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:46.621 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3132268' 00:18:46.621 killing process with pid 3132268 00:18:46.621 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3132268 00:18:46.621 Received shutdown signal, test time was about 10.000000 seconds 00:18:46.621 00:18:46.621 Latency(us) 00:18:46.621 [2024-12-11T13:59:39.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.621 [2024-12-11T13:59:39.669Z] =================================================================================================================== 00:18:46.621 [2024-12-11T13:59:39.669Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:46.621 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3132268 00:18:46.916 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3131951 00:18:46.916 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3131951 ']' 00:18:46.916 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3131951 00:18:46.916 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:46.916 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.916 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3131951 00:18:46.916 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:46.916 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:46.916 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3131951' 00:18:46.916 killing process with pid 3131951 00:18:46.916 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3131951 00:18:46.916 14:59:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # 
wait 3131951 00:18:47.279 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:47.279 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:47.279 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:47.279 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:47.279 "subsystems": [ 00:18:47.279 { 00:18:47.279 "subsystem": "keyring", 00:18:47.279 "config": [ 00:18:47.279 { 00:18:47.279 "method": "keyring_file_add_key", 00:18:47.279 "params": { 00:18:47.279 "name": "key0", 00:18:47.279 "path": "/tmp/tmp.54Dc5b2lHL" 00:18:47.279 } 00:18:47.279 } 00:18:47.279 ] 00:18:47.279 }, 00:18:47.279 { 00:18:47.279 "subsystem": "iobuf", 00:18:47.279 "config": [ 00:18:47.279 { 00:18:47.279 "method": "iobuf_set_options", 00:18:47.279 "params": { 00:18:47.279 "small_pool_count": 8192, 00:18:47.279 "large_pool_count": 1024, 00:18:47.279 "small_bufsize": 8192, 00:18:47.279 "large_bufsize": 135168, 00:18:47.279 "enable_numa": false 00:18:47.279 } 00:18:47.279 } 00:18:47.279 ] 00:18:47.279 }, 00:18:47.279 { 00:18:47.279 "subsystem": "sock", 00:18:47.279 "config": [ 00:18:47.279 { 00:18:47.279 "method": "sock_set_default_impl", 00:18:47.279 "params": { 00:18:47.279 "impl_name": "posix" 00:18:47.279 } 00:18:47.279 }, 00:18:47.279 { 00:18:47.279 "method": "sock_impl_set_options", 00:18:47.279 "params": { 00:18:47.279 "impl_name": "ssl", 00:18:47.279 "recv_buf_size": 4096, 00:18:47.279 "send_buf_size": 4096, 00:18:47.279 "enable_recv_pipe": true, 00:18:47.279 "enable_quickack": false, 00:18:47.279 "enable_placement_id": 0, 00:18:47.279 "enable_zerocopy_send_server": true, 00:18:47.279 "enable_zerocopy_send_client": false, 00:18:47.279 "zerocopy_threshold": 0, 00:18:47.279 "tls_version": 0, 00:18:47.279 "enable_ktls": false 00:18:47.279 } 00:18:47.279 }, 00:18:47.279 { 00:18:47.279 "method": "sock_impl_set_options", 00:18:47.279 "params": { 00:18:47.279 "impl_name": "posix", 00:18:47.279 "recv_buf_size": 2097152, 00:18:47.279 "send_buf_size": 2097152, 00:18:47.279 "enable_recv_pipe": true, 00:18:47.279 "enable_quickack": false, 00:18:47.279 "enable_placement_id": 0, 00:18:47.279 "enable_zerocopy_send_server": true, 00:18:47.279 "enable_zerocopy_send_client": false, 00:18:47.279 "zerocopy_threshold": 0, 00:18:47.279 "tls_version": 0, 00:18:47.279 "enable_ktls": false 00:18:47.279 } 00:18:47.279 } 00:18:47.279 ] 00:18:47.279 }, 00:18:47.279 { 00:18:47.279 "subsystem": "vmd", 00:18:47.279 "config": [] 00:18:47.279 }, 00:18:47.279 { 00:18:47.279 "subsystem": "accel", 00:18:47.279 "config": [ 00:18:47.279 { 00:18:47.279 "method": "accel_set_options", 00:18:47.279 "params": { 00:18:47.279 "small_cache_size": 128, 00:18:47.279 "large_cache_size": 16, 00:18:47.279 "task_count": 2048, 00:18:47.279 "sequence_count": 2048, 00:18:47.279 "buf_count": 2048 00:18:47.279 } 00:18:47.279 } 00:18:47.279 ] 00:18:47.279 }, 00:18:47.279 { 00:18:47.279 "subsystem": "bdev", 00:18:47.279 "config": [ 00:18:47.279 { 00:18:47.279 "method": "bdev_set_options", 00:18:47.279 "params": { 00:18:47.279 "bdev_io_pool_size": 65535, 00:18:47.279 "bdev_io_cache_size": 256, 00:18:47.279 "bdev_auto_examine": true, 00:18:47.279 "iobuf_small_cache_size": 128, 00:18:47.279 "iobuf_large_cache_size": 16 00:18:47.279 } 00:18:47.279 }, 00:18:47.279 { 00:18:47.279 "method": "bdev_raid_set_options", 00:18:47.279 "params": { 00:18:47.279 "process_window_size_kb": 1024, 
00:18:47.279 "process_max_bandwidth_mb_sec": 0 00:18:47.279 } 00:18:47.279 }, 00:18:47.279 { 00:18:47.279 "method": "bdev_iscsi_set_options", 00:18:47.279 "params": { 00:18:47.279 "timeout_sec": 30 00:18:47.279 } 00:18:47.279 }, 00:18:47.279 { 00:18:47.279 "method": "bdev_nvme_set_options", 00:18:47.279 "params": { 00:18:47.279 "action_on_timeout": "none", 00:18:47.279 "timeout_us": 0, 00:18:47.279 "timeout_admin_us": 0, 00:18:47.279 "keep_alive_timeout_ms": 10000, 00:18:47.279 "arbitration_burst": 0, 00:18:47.279 "low_priority_weight": 0, 00:18:47.279 "medium_priority_weight": 0, 00:18:47.279 "high_priority_weight": 0, 00:18:47.279 "nvme_adminq_poll_period_us": 10000, 00:18:47.279 "nvme_ioq_poll_period_us": 0, 00:18:47.279 "io_queue_requests": 0, 00:18:47.279 "delay_cmd_submit": true, 00:18:47.279 "transport_retry_count": 4, 00:18:47.279 "bdev_retry_count": 3, 00:18:47.279 "transport_ack_timeout": 0, 00:18:47.279 "ctrlr_loss_timeout_sec": 0, 00:18:47.279 "reconnect_delay_sec": 0, 00:18:47.279 "fast_io_fail_timeout_sec": 0, 00:18:47.279 "disable_auto_failback": false, 00:18:47.279 "generate_uuids": false, 00:18:47.279 "transport_tos": 0, 00:18:47.279 "nvme_error_stat": false, 00:18:47.279 "rdma_srq_size": 0, 00:18:47.279 "io_path_stat": false, 00:18:47.279 "allow_accel_sequence": false, 00:18:47.279 "rdma_max_cq_size": 0, 00:18:47.279 "rdma_cm_event_timeout_ms": 0, 00:18:47.279 "dhchap_digests": [ 00:18:47.279 "sha256", 00:18:47.279 "sha384", 00:18:47.279 "sha512" 00:18:47.279 ], 00:18:47.279 "dhchap_dhgroups": [ 00:18:47.279 "null", 00:18:47.279 "ffdhe2048", 00:18:47.279 "ffdhe3072", 00:18:47.279 "ffdhe4096", 00:18:47.279 "ffdhe6144", 00:18:47.279 "ffdhe8192" 00:18:47.279 ], 00:18:47.279 "rdma_umr_per_io": false 00:18:47.279 } 00:18:47.279 }, 00:18:47.279 { 00:18:47.280 "method": "bdev_nvme_set_hotplug", 00:18:47.280 "params": { 00:18:47.280 "period_us": 100000, 00:18:47.280 "enable": false 00:18:47.280 } 00:18:47.280 }, 00:18:47.280 { 00:18:47.280 "method": "bdev_malloc_create", 00:18:47.280 "params": { 00:18:47.280 "name": "malloc0", 00:18:47.280 "num_blocks": 8192, 00:18:47.280 "block_size": 4096, 00:18:47.280 "physical_block_size": 4096, 00:18:47.280 "uuid": "24644beb-6143-4078-ad2f-4d606c4cbe9c", 00:18:47.280 "optimal_io_boundary": 0, 00:18:47.280 "md_size": 0, 00:18:47.280 "dif_type": 0, 00:18:47.280 "dif_is_head_of_md": false, 00:18:47.280 "dif_pi_format": 0 00:18:47.280 } 00:18:47.280 }, 00:18:47.280 { 00:18:47.280 "method": "bdev_wait_for_examine" 00:18:47.280 } 00:18:47.280 ] 00:18:47.280 }, 00:18:47.280 { 00:18:47.280 "subsystem": "nbd", 00:18:47.280 "config": [] 00:18:47.280 }, 00:18:47.280 { 00:18:47.280 "subsystem": "scheduler", 00:18:47.280 "config": [ 00:18:47.280 { 00:18:47.280 "method": "framework_set_scheduler", 00:18:47.280 "params": { 00:18:47.280 "name": "static" 00:18:47.280 } 00:18:47.280 } 00:18:47.280 ] 00:18:47.280 }, 00:18:47.280 { 00:18:47.280 "subsystem": "nvmf", 00:18:47.280 "config": [ 00:18:47.280 { 00:18:47.280 "method": "nvmf_set_config", 00:18:47.280 "params": { 00:18:47.280 "discovery_filter": "match_any", 00:18:47.280 "admin_cmd_passthru": { 00:18:47.280 "identify_ctrlr": false 00:18:47.280 }, 00:18:47.280 "dhchap_digests": [ 00:18:47.280 "sha256", 00:18:47.280 "sha384", 00:18:47.280 "sha512" 00:18:47.280 ], 00:18:47.280 "dhchap_dhgroups": [ 00:18:47.280 "null", 00:18:47.280 "ffdhe2048", 00:18:47.280 "ffdhe3072", 00:18:47.280 "ffdhe4096", 00:18:47.280 "ffdhe6144", 00:18:47.280 "ffdhe8192" 00:18:47.280 ] 00:18:47.280 } 00:18:47.280 }, 00:18:47.280 { 
00:18:47.280 "method": "nvmf_set_max_subsystems", 00:18:47.280 "params": { 00:18:47.280 "max_subsystems": 1024 00:18:47.280 } 00:18:47.280 }, 00:18:47.280 { 00:18:47.280 "method": "nvmf_set_crdt", 00:18:47.280 "params": { 00:18:47.280 "crdt1": 0, 00:18:47.280 "crdt2": 0, 00:18:47.280 "crdt3": 0 00:18:47.280 } 00:18:47.280 }, 00:18:47.280 { 00:18:47.280 "method": "nvmf_create_transport", 00:18:47.280 "params": { 00:18:47.280 "trtype": "TCP", 00:18:47.280 "max_queue_depth": 128, 00:18:47.280 "max_io_qpairs_per_ctrlr": 127, 00:18:47.280 "in_capsule_data_size": 4096, 00:18:47.280 "max_io_size": 131072, 00:18:47.280 "io_unit_size": 131072, 00:18:47.280 "max_aq_depth": 128, 00:18:47.280 "num_shared_buffers": 511, 00:18:47.280 "buf_cache_size": 4294967295, 00:18:47.280 "dif_insert_or_strip": false, 00:18:47.280 "zcopy": false, 00:18:47.280 "c2h_success": false, 00:18:47.280 "sock_priority": 0, 00:18:47.280 "abort_timeout_sec": 1, 00:18:47.280 "ack_timeout": 0, 00:18:47.280 "data_wr_pool_size": 0 00:18:47.280 } 00:18:47.280 }, 00:18:47.280 { 00:18:47.280 "method": "nvmf_create_subsystem", 00:18:47.280 "params": { 00:18:47.280 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.280 "allow_any_host": false, 00:18:47.280 "serial_number": "SPDK00000000000001", 00:18:47.280 "model_number": "SPDK bdev Controller", 00:18:47.280 "max_namespaces": 10, 00:18:47.280 "min_cntlid": 1, 00:18:47.280 "max_cntlid": 65519, 00:18:47.280 "ana_reporting": false 00:18:47.280 } 00:18:47.280 }, 00:18:47.280 { 00:18:47.280 "method": "nvmf_subsystem_add_host", 00:18:47.280 "params": { 00:18:47.280 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.280 "host": "nqn.2016-06.io.spdk:host1", 00:18:47.280 "psk": "key0" 00:18:47.280 } 00:18:47.280 }, 00:18:47.280 { 00:18:47.280 "method": "nvmf_subsystem_add_ns", 00:18:47.280 "params": { 00:18:47.280 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.280 "namespace": { 00:18:47.280 "nsid": 1, 00:18:47.280 "bdev_name": "malloc0", 00:18:47.280 "nguid": "24644BEB61434078AD2F4D606C4CBE9C", 00:18:47.280 "uuid": "24644beb-6143-4078-ad2f-4d606c4cbe9c", 00:18:47.280 "no_auto_visible": false 00:18:47.280 } 00:18:47.280 } 00:18:47.280 }, 00:18:47.280 { 00:18:47.280 "method": "nvmf_subsystem_add_listener", 00:18:47.280 "params": { 00:18:47.280 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.280 "listen_address": { 00:18:47.280 "trtype": "TCP", 00:18:47.280 "adrfam": "IPv4", 00:18:47.280 "traddr": "10.0.0.2", 00:18:47.280 "trsvcid": "4420" 00:18:47.280 }, 00:18:47.280 "secure_channel": true 00:18:47.280 } 00:18:47.280 } 00:18:47.280 ] 00:18:47.280 } 00:18:47.280 ] 00:18:47.280 }' 00:18:47.280 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.280 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3132523 00:18:47.280 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3132523 00:18:47.280 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:47.280 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3132523 ']' 00:18:47.280 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.280 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:47.280 14:59:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.280 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:47.280 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.280 [2024-12-11 14:59:40.089873] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:18:47.280 [2024-12-11 14:59:40.089924] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.280 [2024-12-11 14:59:40.170439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.280 [2024-12-11 14:59:40.208798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.280 [2024-12-11 14:59:40.208835] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.280 [2024-12-11 14:59:40.208843] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.280 [2024-12-11 14:59:40.208850] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.280 [2024-12-11 14:59:40.208855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.280 [2024-12-11 14:59:40.209456] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.544 [2024-12-11 14:59:40.422690] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.544 [2024-12-11 14:59:40.454715] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:47.544 [2024-12-11 14:59:40.454947] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.112 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.112 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:48.112 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:48.112 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:48.112 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.112 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.112 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:48.112 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3132704 00:18:48.112 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3132704 /var/tmp/bdevperf.sock 00:18:48.112 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3132704 ']' 00:18:48.112 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:48.112 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:48.112 "subsystems": [ 00:18:48.112 { 00:18:48.112 "subsystem": "keyring", 00:18:48.112 "config": [ 00:18:48.112 { 00:18:48.112 "method": "keyring_file_add_key", 00:18:48.112 "params": { 00:18:48.112 "name": "key0", 00:18:48.112 "path": "/tmp/tmp.54Dc5b2lHL" 00:18:48.112 } 00:18:48.112 } 00:18:48.112 ] 00:18:48.112 }, 00:18:48.112 { 00:18:48.112 "subsystem": "iobuf", 00:18:48.112 "config": [ 00:18:48.112 { 00:18:48.112 "method": "iobuf_set_options", 00:18:48.112 "params": { 00:18:48.112 "small_pool_count": 8192, 00:18:48.112 "large_pool_count": 1024, 00:18:48.112 "small_bufsize": 8192, 00:18:48.112 "large_bufsize": 135168, 00:18:48.112 "enable_numa": false 00:18:48.112 } 00:18:48.112 } 00:18:48.112 ] 00:18:48.112 }, 00:18:48.112 { 00:18:48.112 "subsystem": "sock", 00:18:48.112 "config": [ 00:18:48.112 { 00:18:48.112 "method": "sock_set_default_impl", 00:18:48.112 "params": { 00:18:48.112 "impl_name": "posix" 00:18:48.112 } 00:18:48.112 }, 00:18:48.112 { 00:18:48.112 "method": "sock_impl_set_options", 00:18:48.112 "params": { 00:18:48.112 "impl_name": "ssl", 00:18:48.112 "recv_buf_size": 4096, 00:18:48.112 "send_buf_size": 4096, 00:18:48.112 "enable_recv_pipe": true, 00:18:48.112 "enable_quickack": false, 00:18:48.112 "enable_placement_id": 0, 00:18:48.112 "enable_zerocopy_send_server": true, 00:18:48.112 "enable_zerocopy_send_client": false, 00:18:48.112 "zerocopy_threshold": 0, 00:18:48.112 "tls_version": 0, 00:18:48.112 "enable_ktls": false 00:18:48.112 } 00:18:48.112 }, 00:18:48.112 { 00:18:48.112 "method": "sock_impl_set_options", 00:18:48.112 "params": { 00:18:48.112 "impl_name": "posix", 00:18:48.112 "recv_buf_size": 2097152, 00:18:48.112 "send_buf_size": 2097152, 00:18:48.112 "enable_recv_pipe": true, 00:18:48.112 "enable_quickack": false, 00:18:48.112 "enable_placement_id": 0, 00:18:48.112 "enable_zerocopy_send_server": true, 00:18:48.112 "enable_zerocopy_send_client": false, 00:18:48.112 "zerocopy_threshold": 0, 00:18:48.112 "tls_version": 0, 00:18:48.112 "enable_ktls": false 00:18:48.112 } 00:18:48.112 } 00:18:48.112 ] 00:18:48.112 }, 00:18:48.112 { 00:18:48.112 "subsystem": "vmd", 00:18:48.112 "config": [] 00:18:48.112 }, 00:18:48.112 { 00:18:48.112 "subsystem": "accel", 00:18:48.112 "config": [ 00:18:48.112 { 00:18:48.112 "method": "accel_set_options", 00:18:48.112 "params": { 00:18:48.112 "small_cache_size": 128, 00:18:48.112 "large_cache_size": 16, 00:18:48.112 "task_count": 2048, 00:18:48.112 "sequence_count": 2048, 00:18:48.112 "buf_count": 2048 00:18:48.112 } 00:18:48.112 } 00:18:48.112 ] 00:18:48.112 }, 00:18:48.112 { 00:18:48.112 "subsystem": "bdev", 00:18:48.112 "config": [ 00:18:48.112 { 00:18:48.112 "method": "bdev_set_options", 00:18:48.112 "params": { 00:18:48.112 "bdev_io_pool_size": 65535, 00:18:48.112 "bdev_io_cache_size": 256, 00:18:48.112 "bdev_auto_examine": true, 00:18:48.112 "iobuf_small_cache_size": 128, 00:18:48.112 "iobuf_large_cache_size": 16 00:18:48.112 } 00:18:48.112 }, 00:18:48.112 { 00:18:48.112 "method": "bdev_raid_set_options", 00:18:48.112 "params": { 00:18:48.112 "process_window_size_kb": 1024, 00:18:48.112 "process_max_bandwidth_mb_sec": 0 00:18:48.112 } 00:18:48.112 }, 00:18:48.112 { 00:18:48.112 "method": "bdev_iscsi_set_options", 00:18:48.112 "params": { 00:18:48.112 "timeout_sec": 30 00:18:48.112 } 00:18:48.112 }, 00:18:48.112 { 00:18:48.112 "method": "bdev_nvme_set_options", 00:18:48.112 "params": { 00:18:48.112 "action_on_timeout": "none", 
00:18:48.112 "timeout_us": 0, 00:18:48.112 "timeout_admin_us": 0, 00:18:48.112 "keep_alive_timeout_ms": 10000, 00:18:48.112 "arbitration_burst": 0, 00:18:48.112 "low_priority_weight": 0, 00:18:48.112 "medium_priority_weight": 0, 00:18:48.112 "high_priority_weight": 0, 00:18:48.112 "nvme_adminq_poll_period_us": 10000, 00:18:48.112 "nvme_ioq_poll_period_us": 0, 00:18:48.112 "io_queue_requests": 512, 00:18:48.112 "delay_cmd_submit": true, 00:18:48.112 "transport_retry_count": 4, 00:18:48.112 "bdev_retry_count": 3, 00:18:48.112 "transport_ack_timeout": 0, 00:18:48.112 "ctrlr_loss_timeout_sec": 0, 00:18:48.112 "reconnect_delay_sec": 0, 00:18:48.112 "fast_io_fail_timeout_sec": 0, 00:18:48.112 "disable_auto_failback": false, 00:18:48.112 "generate_uuids": false, 00:18:48.112 "transport_tos": 0, 00:18:48.112 "nvme_error_stat": false, 00:18:48.112 "rdma_srq_size": 0, 00:18:48.112 "io_path_stat": false, 00:18:48.112 "allow_accel_sequence": false, 00:18:48.112 "rdma_max_cq_size": 0, 00:18:48.112 "rdma_cm_event_timeout_ms": 0, 00:18:48.112 "dhchap_digests": [ 00:18:48.112 "sha256", 00:18:48.112 "sha384", 00:18:48.112 "sha512" 00:18:48.112 ], 00:18:48.112 "dhchap_dhgroups": [ 00:18:48.112 "null", 00:18:48.112 "ffdhe2048", 00:18:48.112 "ffdhe3072", 00:18:48.112 "ffdhe4096", 00:18:48.112 "ffdhe6144", 00:18:48.112 "ffdhe8192" 00:18:48.112 ], 00:18:48.112 "rdma_umr_per_io": false 00:18:48.112 } 00:18:48.112 }, 00:18:48.112 { 00:18:48.112 "method": "bdev_nvme_attach_controller", 00:18:48.112 "params": { 00:18:48.112 "name": "TLSTEST", 00:18:48.112 "trtype": "TCP", 00:18:48.112 "adrfam": "IPv4", 00:18:48.112 "traddr": "10.0.0.2", 00:18:48.112 "trsvcid": "4420", 00:18:48.112 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.112 "prchk_reftag": false, 00:18:48.112 "prchk_guard": false, 00:18:48.112 "ctrlr_loss_timeout_sec": 0, 00:18:48.112 "reconnect_delay_sec": 0, 00:18:48.112 "fast_io_fail_timeout_sec": 0, 00:18:48.112 "psk": "key0", 00:18:48.112 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:48.112 "hdgst": false, 00:18:48.112 "ddgst": false, 00:18:48.112 "multipath": "multipath" 00:18:48.112 } 00:18:48.112 }, 00:18:48.112 { 00:18:48.112 "method": "bdev_nvme_set_hotplug", 00:18:48.112 "params": { 00:18:48.112 "period_us": 100000, 00:18:48.112 "enable": false 00:18:48.113 } 00:18:48.113 }, 00:18:48.113 { 00:18:48.113 "method": "bdev_wait_for_examine" 00:18:48.113 } 00:18:48.113 ] 00:18:48.113 }, 00:18:48.113 { 00:18:48.113 "subsystem": "nbd", 00:18:48.113 "config": [] 00:18:48.113 } 00:18:48.113 ] 00:18:48.113 }' 00:18:48.113 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.113 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:48.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:48.113 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.113 14:59:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.113 [2024-12-11 14:59:41.013820] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:18:48.113 [2024-12-11 14:59:41.013868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3132704 ] 00:18:48.113 [2024-12-11 14:59:41.088395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.113 [2024-12-11 14:59:41.128898] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.371 [2024-12-11 14:59:41.282471] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:48.939 14:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:48.939 14:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:48.939 14:59:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:48.939 Running I/O for 10 seconds... 00:18:51.253 5428.00 IOPS, 21.20 MiB/s [2024-12-11T13:59:45.240Z] 5498.50 IOPS, 21.48 MiB/s [2024-12-11T13:59:46.177Z] 5418.67 IOPS, 21.17 MiB/s [2024-12-11T13:59:47.114Z] 5450.00 IOPS, 21.29 MiB/s [2024-12-11T13:59:48.051Z] 5461.80 IOPS, 21.34 MiB/s [2024-12-11T13:59:48.987Z] 5452.83 IOPS, 21.30 MiB/s [2024-12-11T13:59:50.364Z] 5468.86 IOPS, 21.36 MiB/s [2024-12-11T13:59:51.300Z] 5497.00 IOPS, 21.47 MiB/s [2024-12-11T13:59:52.248Z] 5436.44 IOPS, 21.24 MiB/s [2024-12-11T13:59:52.248Z] 5449.60 IOPS, 21.29 MiB/s 00:18:59.200 Latency(us) 00:18:59.200 [2024-12-11T13:59:52.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.200 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:59.200 Verification LBA range: start 0x0 length 0x2000 00:18:59.200 TLSTESTn1 : 10.01 5455.40 21.31 0.00 0.00 23428.66 4815.47 36016.31 00:18:59.200 [2024-12-11T13:59:52.248Z] =================================================================================================================== 00:18:59.200 [2024-12-11T13:59:52.248Z] Total : 5455.40 21.31 0.00 0.00 23428.66 4815.47 36016.31 00:18:59.200 { 00:18:59.200 "results": [ 00:18:59.200 { 00:18:59.200 "job": "TLSTESTn1", 00:18:59.200 "core_mask": "0x4", 00:18:59.200 "workload": "verify", 00:18:59.200 "status": "finished", 00:18:59.200 "verify_range": { 00:18:59.200 "start": 0, 00:18:59.200 "length": 8192 00:18:59.200 }, 00:18:59.200 "queue_depth": 128, 00:18:59.200 "io_size": 4096, 00:18:59.200 "runtime": 10.012642, 00:18:59.200 "iops": 5455.403279174468, 00:18:59.200 "mibps": 21.310169059275264, 00:18:59.200 "io_failed": 0, 00:18:59.200 "io_timeout": 0, 00:18:59.200 "avg_latency_us": 23428.664471933705, 00:18:59.200 "min_latency_us": 4815.471304347826, 00:18:59.200 "max_latency_us": 36016.30608695652 00:18:59.200 } 00:18:59.200 ], 00:18:59.200 "core_count": 1 00:18:59.200 } 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3132704 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3132704 ']' 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3132704 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132704 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3132704' 00:18:59.200 killing process with pid 3132704 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3132704 00:18:59.200 Received shutdown signal, test time was about 10.000000 seconds 00:18:59.200 00:18:59.200 Latency(us) 00:18:59.200 [2024-12-11T13:59:52.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.200 [2024-12-11T13:59:52.248Z] =================================================================================================================== 00:18:59.200 [2024-12-11T13:59:52.248Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3132704 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3132523 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3132523 ']' 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3132523 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:59.200 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132523 00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3132523' 00:18:59.459 killing process with pid 3132523 00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3132523 00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3132523 00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3134612 00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3134612 
00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3134612 ']' 00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.459 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.459 [2024-12-11 14:59:52.504669] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:18:59.459 [2024-12-11 14:59:52.504716] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.718 [2024-12-11 14:59:52.581996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.718 [2024-12-11 14:59:52.619243] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.718 [2024-12-11 14:59:52.619280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.718 [2024-12-11 14:59:52.619290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.718 [2024-12-11 14:59:52.619296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.718 [2024-12-11 14:59:52.619303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
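The app_setup_trace notices above describe how to inspect the tracepoint data this target keeps while it runs with -e 0xFFFF. Acting on those hints directly (a sketch; both commands assume they are run on the build host while the shared-memory file for instance 0 still exists):

  spdk_trace -s nvmf -i 0                     # live snapshot of the nvmf tracepoints, as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0  # or keep the raw trace file for offline analysis/debug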
00:18:59.718 [2024-12-11 14:59:52.619868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.718 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.718 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:59.718 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:59.718 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:59.718 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.718 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.718 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.54Dc5b2lHL 00:18:59.718 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.54Dc5b2lHL 00:18:59.718 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:59.977 [2024-12-11 14:59:52.928372] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:59.977 14:59:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:00.235 14:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:00.494 [2024-12-11 14:59:53.321390] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:00.494 [2024-12-11 14:59:53.321607] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.494 14:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:00.494 malloc0 00:19:00.752 14:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:00.752 14:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.54Dc5b2lHL 00:19:01.010 14:59:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:01.268 14:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:01.268 14:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3134871 00:19:01.268 14:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:01.268 14:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3134871 /var/tmp/bdevperf.sock 00:19:01.268 14:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 3134871 ']' 00:19:01.268 14:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.268 14:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.268 14:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.268 14:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.268 14:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:01.268 [2024-12-11 14:59:54.212385] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:19:01.268 [2024-12-11 14:59:54.212431] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3134871 ] 00:19:01.268 [2024-12-11 14:59:54.285397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.527 [2024-12-11 14:59:54.326055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.527 14:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:01.527 14:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:01.527 14:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.54Dc5b2lHL 00:19:01.785 14:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:01.785 [2024-12-11 14:59:54.791594] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.042 nvme0n1 00:19:02.042 14:59:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:02.042 Running I/O for 1 seconds... 
00:19:02.981 5058.00 IOPS, 19.76 MiB/s 00:19:02.981 Latency(us) 00:19:02.981 [2024-12-11T13:59:56.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.981 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:02.981 Verification LBA range: start 0x0 length 0x2000 00:19:02.981 nvme0n1 : 1.01 5118.94 20.00 0.00 0.00 24834.76 5014.93 27582.11 00:19:02.981 [2024-12-11T13:59:56.029Z] =================================================================================================================== 00:19:02.981 [2024-12-11T13:59:56.029Z] Total : 5118.94 20.00 0.00 0.00 24834.76 5014.93 27582.11 00:19:02.981 { 00:19:02.981 "results": [ 00:19:02.981 { 00:19:02.981 "job": "nvme0n1", 00:19:02.981 "core_mask": "0x2", 00:19:02.981 "workload": "verify", 00:19:02.981 "status": "finished", 00:19:02.981 "verify_range": { 00:19:02.981 "start": 0, 00:19:02.981 "length": 8192 00:19:02.981 }, 00:19:02.981 "queue_depth": 128, 00:19:02.981 "io_size": 4096, 00:19:02.981 "runtime": 1.0131, 00:19:02.981 "iops": 5118.941861612871, 00:19:02.981 "mibps": 19.99586664692528, 00:19:02.981 "io_failed": 0, 00:19:02.981 "io_timeout": 0, 00:19:02.981 "avg_latency_us": 24834.761103304885, 00:19:02.981 "min_latency_us": 5014.928695652174, 00:19:02.981 "max_latency_us": 27582.107826086958 00:19:02.981 } 00:19:02.981 ], 00:19:02.981 "core_count": 1 00:19:02.981 } 00:19:02.981 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3134871 00:19:02.981 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3134871 ']' 00:19:02.981 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3134871 00:19:02.981 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:02.981 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:02.981 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3134871 00:19:03.239 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:03.239 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:03.239 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3134871' 00:19:03.239 killing process with pid 3134871 00:19:03.239 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3134871 00:19:03.239 Received shutdown signal, test time was about 1.000000 seconds 00:19:03.239 00:19:03.239 Latency(us) 00:19:03.239 [2024-12-11T13:59:56.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.239 [2024-12-11T13:59:56.287Z] =================================================================================================================== 00:19:03.239 [2024-12-11T13:59:56.287Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:03.239 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3134871 00:19:03.239 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3134612 00:19:03.239 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3134612 ']' 00:19:03.239 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3134612 00:19:03.239 14:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:03.239 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.239 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3134612 00:19:03.239 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:03.239 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:03.239 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3134612' 00:19:03.239 killing process with pid 3134612 00:19:03.239 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3134612 00:19:03.239 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3134612 00:19:03.498 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:03.498 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:03.498 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:03.498 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.498 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3135216 00:19:03.498 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:03.498 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3135216 00:19:03.498 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3135216 ']' 00:19:03.498 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.498 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:03.498 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.498 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:03.498 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.498 [2024-12-11 14:59:56.500626] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:19:03.498 [2024-12-11 14:59:56.500675] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.757 [2024-12-11 14:59:56.580276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.757 [2024-12-11 14:59:56.620242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.757 [2024-12-11 14:59:56.620277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
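killprocess, traced here for nvmfpid 3134612 and earlier for the bdevperf pid, follows the same defensive pattern each time: verify the pid is still alive with kill -0, read its command name with ps so a sudo process is never signalled through this path, then kill it and wait for it to be reaped. Condensed into a sketch (variable names are illustrative, not the autotest_common.sh source):

  pid=3134612
  kill -0 "$pid"                              # non-zero exit if the process is already gone
  name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 for an SPDK app
  [ "$name" != sudo ] && echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                 # reap the child so the next stage starts clean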
00:19:03.757 [2024-12-11 14:59:56.620285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.757 [2024-12-11 14:59:56.620292] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.757 [2024-12-11 14:59:56.620297] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:03.757 [2024-12-11 14:59:56.620893] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.757 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:03.757 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:03.757 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:03.757 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:03.757 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.757 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.757 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:03.757 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.757 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.757 [2024-12-11 14:59:56.764989] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.757 malloc0 00:19:03.757 [2024-12-11 14:59:56.793020] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:03.757 [2024-12-11 14:59:56.793248] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.016 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.016 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3135361 00:19:04.016 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3135361 /var/tmp/bdevperf.sock 00:19:04.016 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:04.016 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3135361 ']' 00:19:04.016 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.016 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.016 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:04.016 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.016 14:59:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.016 [2024-12-11 14:59:56.868123] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
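The target that has just come up on 10.0.0.2 port 4420 mirrors the setup_nvmf_tgt sequence traced earlier: a TCP transport, a subsystem backed by a malloc bdev, a TLS-enabled listener (-k), and a host entry bound to the PSK file registered under the keyring name key0. Collapsed into the underlying rpc.py calls (a sketch; the RPC path is shortened from the full workspace path used in the log, and /tmp/tmp.54Dc5b2lHL is the temporary PSK file generated earlier in this test):

  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 /tmp/tmp.54Dc5b2lHL
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0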
00:19:04.016 [2024-12-11 14:59:56.868176] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3135361 ] 00:19:04.016 [2024-12-11 14:59:56.942969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.017 [2024-12-11 14:59:56.984437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.275 14:59:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.275 14:59:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:04.275 14:59:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.54Dc5b2lHL 00:19:04.275 14:59:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:04.534 [2024-12-11 14:59:57.445052] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:04.534 nvme0n1 00:19:04.534 14:59:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:04.792 Running I/O for 1 seconds... 00:19:05.729 5277.00 IOPS, 20.61 MiB/s 00:19:05.729 Latency(us) 00:19:05.729 [2024-12-11T13:59:58.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.729 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:05.729 Verification LBA range: start 0x0 length 0x2000 00:19:05.729 nvme0n1 : 1.02 5307.18 20.73 0.00 0.00 23928.85 4843.97 28607.89 00:19:05.729 [2024-12-11T13:59:58.777Z] =================================================================================================================== 00:19:05.729 [2024-12-11T13:59:58.777Z] Total : 5307.18 20.73 0.00 0.00 23928.85 4843.97 28607.89 00:19:05.729 { 00:19:05.729 "results": [ 00:19:05.729 { 00:19:05.729 "job": "nvme0n1", 00:19:05.729 "core_mask": "0x2", 00:19:05.729 "workload": "verify", 00:19:05.729 "status": "finished", 00:19:05.729 "verify_range": { 00:19:05.729 "start": 0, 00:19:05.729 "length": 8192 00:19:05.729 }, 00:19:05.729 "queue_depth": 128, 00:19:05.729 "io_size": 4096, 00:19:05.729 "runtime": 1.018431, 00:19:05.729 "iops": 5307.183304514493, 00:19:05.729 "mibps": 20.73118478325974, 00:19:05.729 "io_failed": 0, 00:19:05.729 "io_timeout": 0, 00:19:05.729 "avg_latency_us": 23928.84550890882, 00:19:05.729 "min_latency_us": 4843.965217391305, 00:19:05.729 "max_latency_us": 28607.888695652175 00:19:05.729 } 00:19:05.729 ], 00:19:05.729 "core_count": 1 00:19:05.729 } 00:19:05.729 14:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:05.729 14:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.729 14:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.988 14:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.988 14:59:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:05.988 "subsystems": [ 00:19:05.988 { 00:19:05.988 "subsystem": "keyring", 00:19:05.988 "config": [ 00:19:05.988 { 00:19:05.988 "method": "keyring_file_add_key", 00:19:05.988 "params": { 00:19:05.988 "name": "key0", 00:19:05.988 "path": "/tmp/tmp.54Dc5b2lHL" 00:19:05.988 } 00:19:05.988 } 00:19:05.988 ] 00:19:05.988 }, 00:19:05.988 { 00:19:05.988 "subsystem": "iobuf", 00:19:05.988 "config": [ 00:19:05.988 { 00:19:05.988 "method": "iobuf_set_options", 00:19:05.988 "params": { 00:19:05.988 "small_pool_count": 8192, 00:19:05.988 "large_pool_count": 1024, 00:19:05.988 "small_bufsize": 8192, 00:19:05.988 "large_bufsize": 135168, 00:19:05.988 "enable_numa": false 00:19:05.988 } 00:19:05.988 } 00:19:05.988 ] 00:19:05.988 }, 00:19:05.988 { 00:19:05.988 "subsystem": "sock", 00:19:05.988 "config": [ 00:19:05.988 { 00:19:05.988 "method": "sock_set_default_impl", 00:19:05.988 "params": { 00:19:05.988 "impl_name": "posix" 00:19:05.988 } 00:19:05.988 }, 00:19:05.988 { 00:19:05.988 "method": "sock_impl_set_options", 00:19:05.988 "params": { 00:19:05.988 "impl_name": "ssl", 00:19:05.988 "recv_buf_size": 4096, 00:19:05.988 "send_buf_size": 4096, 00:19:05.988 "enable_recv_pipe": true, 00:19:05.988 "enable_quickack": false, 00:19:05.988 "enable_placement_id": 0, 00:19:05.988 "enable_zerocopy_send_server": true, 00:19:05.988 "enable_zerocopy_send_client": false, 00:19:05.988 "zerocopy_threshold": 0, 00:19:05.988 "tls_version": 0, 00:19:05.988 "enable_ktls": false 00:19:05.988 } 00:19:05.988 }, 00:19:05.988 { 00:19:05.988 "method": "sock_impl_set_options", 00:19:05.988 "params": { 00:19:05.988 "impl_name": "posix", 00:19:05.988 "recv_buf_size": 2097152, 00:19:05.988 "send_buf_size": 2097152, 00:19:05.988 "enable_recv_pipe": true, 00:19:05.988 "enable_quickack": false, 00:19:05.988 "enable_placement_id": 0, 00:19:05.988 "enable_zerocopy_send_server": true, 00:19:05.988 "enable_zerocopy_send_client": false, 00:19:05.988 "zerocopy_threshold": 0, 00:19:05.988 "tls_version": 0, 00:19:05.988 "enable_ktls": false 00:19:05.988 } 00:19:05.988 } 00:19:05.988 ] 00:19:05.988 }, 00:19:05.988 { 00:19:05.988 "subsystem": "vmd", 00:19:05.988 "config": [] 00:19:05.988 }, 00:19:05.988 { 00:19:05.988 "subsystem": "accel", 00:19:05.988 "config": [ 00:19:05.988 { 00:19:05.988 "method": "accel_set_options", 00:19:05.988 "params": { 00:19:05.988 "small_cache_size": 128, 00:19:05.988 "large_cache_size": 16, 00:19:05.988 "task_count": 2048, 00:19:05.988 "sequence_count": 2048, 00:19:05.988 "buf_count": 2048 00:19:05.988 } 00:19:05.988 } 00:19:05.989 ] 00:19:05.989 }, 00:19:05.989 { 00:19:05.989 "subsystem": "bdev", 00:19:05.989 "config": [ 00:19:05.989 { 00:19:05.989 "method": "bdev_set_options", 00:19:05.989 "params": { 00:19:05.989 "bdev_io_pool_size": 65535, 00:19:05.989 "bdev_io_cache_size": 256, 00:19:05.989 "bdev_auto_examine": true, 00:19:05.989 "iobuf_small_cache_size": 128, 00:19:05.989 "iobuf_large_cache_size": 16 00:19:05.989 } 00:19:05.989 }, 00:19:05.989 { 00:19:05.989 "method": "bdev_raid_set_options", 00:19:05.989 "params": { 00:19:05.989 "process_window_size_kb": 1024, 00:19:05.989 "process_max_bandwidth_mb_sec": 0 00:19:05.989 } 00:19:05.989 }, 00:19:05.989 { 00:19:05.989 "method": "bdev_iscsi_set_options", 00:19:05.989 "params": { 00:19:05.989 "timeout_sec": 30 00:19:05.989 } 00:19:05.989 }, 00:19:05.989 { 00:19:05.989 "method": "bdev_nvme_set_options", 00:19:05.989 "params": { 00:19:05.989 "action_on_timeout": "none", 00:19:05.989 
"timeout_us": 0, 00:19:05.989 "timeout_admin_us": 0, 00:19:05.989 "keep_alive_timeout_ms": 10000, 00:19:05.989 "arbitration_burst": 0, 00:19:05.989 "low_priority_weight": 0, 00:19:05.989 "medium_priority_weight": 0, 00:19:05.989 "high_priority_weight": 0, 00:19:05.989 "nvme_adminq_poll_period_us": 10000, 00:19:05.989 "nvme_ioq_poll_period_us": 0, 00:19:05.989 "io_queue_requests": 0, 00:19:05.989 "delay_cmd_submit": true, 00:19:05.989 "transport_retry_count": 4, 00:19:05.989 "bdev_retry_count": 3, 00:19:05.989 "transport_ack_timeout": 0, 00:19:05.989 "ctrlr_loss_timeout_sec": 0, 00:19:05.989 "reconnect_delay_sec": 0, 00:19:05.989 "fast_io_fail_timeout_sec": 0, 00:19:05.989 "disable_auto_failback": false, 00:19:05.989 "generate_uuids": false, 00:19:05.989 "transport_tos": 0, 00:19:05.989 "nvme_error_stat": false, 00:19:05.989 "rdma_srq_size": 0, 00:19:05.989 "io_path_stat": false, 00:19:05.989 "allow_accel_sequence": false, 00:19:05.989 "rdma_max_cq_size": 0, 00:19:05.989 "rdma_cm_event_timeout_ms": 0, 00:19:05.989 "dhchap_digests": [ 00:19:05.989 "sha256", 00:19:05.989 "sha384", 00:19:05.989 "sha512" 00:19:05.989 ], 00:19:05.989 "dhchap_dhgroups": [ 00:19:05.989 "null", 00:19:05.989 "ffdhe2048", 00:19:05.989 "ffdhe3072", 00:19:05.989 "ffdhe4096", 00:19:05.989 "ffdhe6144", 00:19:05.989 "ffdhe8192" 00:19:05.989 ], 00:19:05.989 "rdma_umr_per_io": false 00:19:05.989 } 00:19:05.989 }, 00:19:05.989 { 00:19:05.989 "method": "bdev_nvme_set_hotplug", 00:19:05.989 "params": { 00:19:05.989 "period_us": 100000, 00:19:05.989 "enable": false 00:19:05.989 } 00:19:05.989 }, 00:19:05.989 { 00:19:05.989 "method": "bdev_malloc_create", 00:19:05.989 "params": { 00:19:05.989 "name": "malloc0", 00:19:05.989 "num_blocks": 8192, 00:19:05.989 "block_size": 4096, 00:19:05.989 "physical_block_size": 4096, 00:19:05.989 "uuid": "e6e82592-8ebb-4679-9a7f-fe909fc25a3c", 00:19:05.989 "optimal_io_boundary": 0, 00:19:05.989 "md_size": 0, 00:19:05.989 "dif_type": 0, 00:19:05.989 "dif_is_head_of_md": false, 00:19:05.989 "dif_pi_format": 0 00:19:05.989 } 00:19:05.989 }, 00:19:05.989 { 00:19:05.989 "method": "bdev_wait_for_examine" 00:19:05.989 } 00:19:05.989 ] 00:19:05.989 }, 00:19:05.989 { 00:19:05.989 "subsystem": "nbd", 00:19:05.989 "config": [] 00:19:05.989 }, 00:19:05.989 { 00:19:05.989 "subsystem": "scheduler", 00:19:05.989 "config": [ 00:19:05.989 { 00:19:05.989 "method": "framework_set_scheduler", 00:19:05.989 "params": { 00:19:05.989 "name": "static" 00:19:05.989 } 00:19:05.989 } 00:19:05.989 ] 00:19:05.989 }, 00:19:05.989 { 00:19:05.989 "subsystem": "nvmf", 00:19:05.989 "config": [ 00:19:05.989 { 00:19:05.989 "method": "nvmf_set_config", 00:19:05.989 "params": { 00:19:05.989 "discovery_filter": "match_any", 00:19:05.989 "admin_cmd_passthru": { 00:19:05.989 "identify_ctrlr": false 00:19:05.989 }, 00:19:05.989 "dhchap_digests": [ 00:19:05.989 "sha256", 00:19:05.989 "sha384", 00:19:05.989 "sha512" 00:19:05.989 ], 00:19:05.989 "dhchap_dhgroups": [ 00:19:05.989 "null", 00:19:05.989 "ffdhe2048", 00:19:05.989 "ffdhe3072", 00:19:05.989 "ffdhe4096", 00:19:05.989 "ffdhe6144", 00:19:05.989 "ffdhe8192" 00:19:05.989 ] 00:19:05.989 } 00:19:05.989 }, 00:19:05.989 { 00:19:05.989 "method": "nvmf_set_max_subsystems", 00:19:05.989 "params": { 00:19:05.989 "max_subsystems": 1024 00:19:05.989 } 00:19:05.989 }, 00:19:05.989 { 00:19:05.989 "method": "nvmf_set_crdt", 00:19:05.989 "params": { 00:19:05.989 "crdt1": 0, 00:19:05.989 "crdt2": 0, 00:19:05.989 "crdt3": 0 00:19:05.989 } 00:19:05.989 }, 00:19:05.989 { 00:19:05.989 "method": 
"nvmf_create_transport", 00:19:05.989 "params": { 00:19:05.989 "trtype": "TCP", 00:19:05.989 "max_queue_depth": 128, 00:19:05.989 "max_io_qpairs_per_ctrlr": 127, 00:19:05.989 "in_capsule_data_size": 4096, 00:19:05.989 "max_io_size": 131072, 00:19:05.989 "io_unit_size": 131072, 00:19:05.989 "max_aq_depth": 128, 00:19:05.989 "num_shared_buffers": 511, 00:19:05.989 "buf_cache_size": 4294967295, 00:19:05.989 "dif_insert_or_strip": false, 00:19:05.989 "zcopy": false, 00:19:05.989 "c2h_success": false, 00:19:05.989 "sock_priority": 0, 00:19:05.989 "abort_timeout_sec": 1, 00:19:05.989 "ack_timeout": 0, 00:19:05.989 "data_wr_pool_size": 0 00:19:05.989 } 00:19:05.989 }, 00:19:05.989 { 00:19:05.989 "method": "nvmf_create_subsystem", 00:19:05.989 "params": { 00:19:05.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.989 "allow_any_host": false, 00:19:05.989 "serial_number": "00000000000000000000", 00:19:05.989 "model_number": "SPDK bdev Controller", 00:19:05.989 "max_namespaces": 32, 00:19:05.989 "min_cntlid": 1, 00:19:05.989 "max_cntlid": 65519, 00:19:05.989 "ana_reporting": false 00:19:05.989 } 00:19:05.989 }, 00:19:05.989 { 00:19:05.989 "method": "nvmf_subsystem_add_host", 00:19:05.989 "params": { 00:19:05.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.989 "host": "nqn.2016-06.io.spdk:host1", 00:19:05.989 "psk": "key0" 00:19:05.989 } 00:19:05.989 }, 00:19:05.989 { 00:19:05.989 "method": "nvmf_subsystem_add_ns", 00:19:05.989 "params": { 00:19:05.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.989 "namespace": { 00:19:05.989 "nsid": 1, 00:19:05.989 "bdev_name": "malloc0", 00:19:05.989 "nguid": "E6E825928EBB46799A7FFE909FC25A3C", 00:19:05.989 "uuid": "e6e82592-8ebb-4679-9a7f-fe909fc25a3c", 00:19:05.989 "no_auto_visible": false 00:19:05.989 } 00:19:05.989 } 00:19:05.989 }, 00:19:05.989 { 00:19:05.989 "method": "nvmf_subsystem_add_listener", 00:19:05.989 "params": { 00:19:05.989 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.989 "listen_address": { 00:19:05.989 "trtype": "TCP", 00:19:05.989 "adrfam": "IPv4", 00:19:05.989 "traddr": "10.0.0.2", 00:19:05.989 "trsvcid": "4420" 00:19:05.989 }, 00:19:05.989 "secure_channel": false, 00:19:05.989 "sock_impl": "ssl" 00:19:05.989 } 00:19:05.989 } 00:19:05.989 ] 00:19:05.989 } 00:19:05.989 ] 00:19:05.989 }' 00:19:05.989 14:59:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:06.249 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:06.249 "subsystems": [ 00:19:06.249 { 00:19:06.249 "subsystem": "keyring", 00:19:06.249 "config": [ 00:19:06.249 { 00:19:06.249 "method": "keyring_file_add_key", 00:19:06.249 "params": { 00:19:06.249 "name": "key0", 00:19:06.249 "path": "/tmp/tmp.54Dc5b2lHL" 00:19:06.249 } 00:19:06.249 } 00:19:06.249 ] 00:19:06.249 }, 00:19:06.249 { 00:19:06.249 "subsystem": "iobuf", 00:19:06.249 "config": [ 00:19:06.249 { 00:19:06.249 "method": "iobuf_set_options", 00:19:06.249 "params": { 00:19:06.249 "small_pool_count": 8192, 00:19:06.249 "large_pool_count": 1024, 00:19:06.249 "small_bufsize": 8192, 00:19:06.249 "large_bufsize": 135168, 00:19:06.249 "enable_numa": false 00:19:06.249 } 00:19:06.249 } 00:19:06.249 ] 00:19:06.249 }, 00:19:06.249 { 00:19:06.249 "subsystem": "sock", 00:19:06.249 "config": [ 00:19:06.249 { 00:19:06.249 "method": "sock_set_default_impl", 00:19:06.249 "params": { 00:19:06.249 "impl_name": "posix" 00:19:06.249 } 00:19:06.249 }, 00:19:06.249 { 00:19:06.249 
"method": "sock_impl_set_options", 00:19:06.249 "params": { 00:19:06.249 "impl_name": "ssl", 00:19:06.249 "recv_buf_size": 4096, 00:19:06.249 "send_buf_size": 4096, 00:19:06.249 "enable_recv_pipe": true, 00:19:06.249 "enable_quickack": false, 00:19:06.249 "enable_placement_id": 0, 00:19:06.249 "enable_zerocopy_send_server": true, 00:19:06.249 "enable_zerocopy_send_client": false, 00:19:06.249 "zerocopy_threshold": 0, 00:19:06.249 "tls_version": 0, 00:19:06.249 "enable_ktls": false 00:19:06.249 } 00:19:06.249 }, 00:19:06.249 { 00:19:06.249 "method": "sock_impl_set_options", 00:19:06.249 "params": { 00:19:06.249 "impl_name": "posix", 00:19:06.249 "recv_buf_size": 2097152, 00:19:06.249 "send_buf_size": 2097152, 00:19:06.249 "enable_recv_pipe": true, 00:19:06.249 "enable_quickack": false, 00:19:06.249 "enable_placement_id": 0, 00:19:06.249 "enable_zerocopy_send_server": true, 00:19:06.249 "enable_zerocopy_send_client": false, 00:19:06.249 "zerocopy_threshold": 0, 00:19:06.249 "tls_version": 0, 00:19:06.249 "enable_ktls": false 00:19:06.249 } 00:19:06.249 } 00:19:06.249 ] 00:19:06.249 }, 00:19:06.249 { 00:19:06.249 "subsystem": "vmd", 00:19:06.249 "config": [] 00:19:06.249 }, 00:19:06.249 { 00:19:06.249 "subsystem": "accel", 00:19:06.249 "config": [ 00:19:06.249 { 00:19:06.249 "method": "accel_set_options", 00:19:06.249 "params": { 00:19:06.249 "small_cache_size": 128, 00:19:06.249 "large_cache_size": 16, 00:19:06.249 "task_count": 2048, 00:19:06.249 "sequence_count": 2048, 00:19:06.249 "buf_count": 2048 00:19:06.249 } 00:19:06.249 } 00:19:06.249 ] 00:19:06.249 }, 00:19:06.249 { 00:19:06.249 "subsystem": "bdev", 00:19:06.249 "config": [ 00:19:06.249 { 00:19:06.249 "method": "bdev_set_options", 00:19:06.249 "params": { 00:19:06.249 "bdev_io_pool_size": 65535, 00:19:06.249 "bdev_io_cache_size": 256, 00:19:06.249 "bdev_auto_examine": true, 00:19:06.249 "iobuf_small_cache_size": 128, 00:19:06.249 "iobuf_large_cache_size": 16 00:19:06.249 } 00:19:06.249 }, 00:19:06.249 { 00:19:06.249 "method": "bdev_raid_set_options", 00:19:06.249 "params": { 00:19:06.249 "process_window_size_kb": 1024, 00:19:06.249 "process_max_bandwidth_mb_sec": 0 00:19:06.249 } 00:19:06.249 }, 00:19:06.249 { 00:19:06.249 "method": "bdev_iscsi_set_options", 00:19:06.249 "params": { 00:19:06.249 "timeout_sec": 30 00:19:06.249 } 00:19:06.249 }, 00:19:06.249 { 00:19:06.249 "method": "bdev_nvme_set_options", 00:19:06.249 "params": { 00:19:06.249 "action_on_timeout": "none", 00:19:06.249 "timeout_us": 0, 00:19:06.249 "timeout_admin_us": 0, 00:19:06.249 "keep_alive_timeout_ms": 10000, 00:19:06.249 "arbitration_burst": 0, 00:19:06.249 "low_priority_weight": 0, 00:19:06.249 "medium_priority_weight": 0, 00:19:06.249 "high_priority_weight": 0, 00:19:06.249 "nvme_adminq_poll_period_us": 10000, 00:19:06.249 "nvme_ioq_poll_period_us": 0, 00:19:06.249 "io_queue_requests": 512, 00:19:06.249 "delay_cmd_submit": true, 00:19:06.249 "transport_retry_count": 4, 00:19:06.249 "bdev_retry_count": 3, 00:19:06.249 "transport_ack_timeout": 0, 00:19:06.249 "ctrlr_loss_timeout_sec": 0, 00:19:06.249 "reconnect_delay_sec": 0, 00:19:06.249 "fast_io_fail_timeout_sec": 0, 00:19:06.249 "disable_auto_failback": false, 00:19:06.249 "generate_uuids": false, 00:19:06.249 "transport_tos": 0, 00:19:06.249 "nvme_error_stat": false, 00:19:06.249 "rdma_srq_size": 0, 00:19:06.249 "io_path_stat": false, 00:19:06.249 "allow_accel_sequence": false, 00:19:06.249 "rdma_max_cq_size": 0, 00:19:06.249 "rdma_cm_event_timeout_ms": 0, 00:19:06.249 "dhchap_digests": [ 00:19:06.249 
"sha256", 00:19:06.249 "sha384", 00:19:06.249 "sha512" 00:19:06.249 ], 00:19:06.249 "dhchap_dhgroups": [ 00:19:06.249 "null", 00:19:06.249 "ffdhe2048", 00:19:06.249 "ffdhe3072", 00:19:06.249 "ffdhe4096", 00:19:06.249 "ffdhe6144", 00:19:06.249 "ffdhe8192" 00:19:06.249 ], 00:19:06.249 "rdma_umr_per_io": false 00:19:06.249 } 00:19:06.249 }, 00:19:06.249 { 00:19:06.249 "method": "bdev_nvme_attach_controller", 00:19:06.249 "params": { 00:19:06.249 "name": "nvme0", 00:19:06.249 "trtype": "TCP", 00:19:06.249 "adrfam": "IPv4", 00:19:06.249 "traddr": "10.0.0.2", 00:19:06.249 "trsvcid": "4420", 00:19:06.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.249 "prchk_reftag": false, 00:19:06.249 "prchk_guard": false, 00:19:06.249 "ctrlr_loss_timeout_sec": 0, 00:19:06.249 "reconnect_delay_sec": 0, 00:19:06.249 "fast_io_fail_timeout_sec": 0, 00:19:06.249 "psk": "key0", 00:19:06.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:06.249 "hdgst": false, 00:19:06.249 "ddgst": false, 00:19:06.249 "multipath": "multipath" 00:19:06.249 } 00:19:06.249 }, 00:19:06.249 { 00:19:06.249 "method": "bdev_nvme_set_hotplug", 00:19:06.249 "params": { 00:19:06.249 "period_us": 100000, 00:19:06.249 "enable": false 00:19:06.249 } 00:19:06.249 }, 00:19:06.249 { 00:19:06.249 "method": "bdev_enable_histogram", 00:19:06.249 "params": { 00:19:06.249 "name": "nvme0n1", 00:19:06.249 "enable": true 00:19:06.249 } 00:19:06.249 }, 00:19:06.249 { 00:19:06.249 "method": "bdev_wait_for_examine" 00:19:06.249 } 00:19:06.249 ] 00:19:06.249 }, 00:19:06.249 { 00:19:06.249 "subsystem": "nbd", 00:19:06.249 "config": [] 00:19:06.249 } 00:19:06.249 ] 00:19:06.249 }' 00:19:06.249 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3135361 00:19:06.249 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3135361 ']' 00:19:06.249 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3135361 00:19:06.249 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:06.249 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.249 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3135361 00:19:06.249 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:06.249 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:06.249 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3135361' 00:19:06.249 killing process with pid 3135361 00:19:06.249 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3135361 00:19:06.249 Received shutdown signal, test time was about 1.000000 seconds 00:19:06.249 00:19:06.249 Latency(us) 00:19:06.249 [2024-12-11T13:59:59.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.249 [2024-12-11T13:59:59.297Z] =================================================================================================================== 00:19:06.249 [2024-12-11T13:59:59.297Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:06.250 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3135361 00:19:06.250 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3135216 00:19:06.250 14:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3135216 ']' 00:19:06.250 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3135216 00:19:06.250 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:06.250 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.250 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3135216 00:19:06.509 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:06.509 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:06.509 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3135216' 00:19:06.509 killing process with pid 3135216 00:19:06.509 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3135216 00:19:06.509 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3135216 00:19:06.509 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:06.509 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:06.509 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.509 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:06.509 "subsystems": [ 00:19:06.509 { 00:19:06.509 "subsystem": "keyring", 00:19:06.509 "config": [ 00:19:06.509 { 00:19:06.509 "method": "keyring_file_add_key", 00:19:06.509 "params": { 00:19:06.509 "name": "key0", 00:19:06.509 "path": "/tmp/tmp.54Dc5b2lHL" 00:19:06.509 } 00:19:06.509 } 00:19:06.509 ] 00:19:06.509 }, 00:19:06.509 { 00:19:06.509 "subsystem": "iobuf", 00:19:06.509 "config": [ 00:19:06.509 { 00:19:06.509 "method": "iobuf_set_options", 00:19:06.509 "params": { 00:19:06.509 "small_pool_count": 8192, 00:19:06.509 "large_pool_count": 1024, 00:19:06.509 "small_bufsize": 8192, 00:19:06.509 "large_bufsize": 135168, 00:19:06.509 "enable_numa": false 00:19:06.509 } 00:19:06.509 } 00:19:06.509 ] 00:19:06.509 }, 00:19:06.509 { 00:19:06.509 "subsystem": "sock", 00:19:06.509 "config": [ 00:19:06.509 { 00:19:06.509 "method": "sock_set_default_impl", 00:19:06.509 "params": { 00:19:06.509 "impl_name": "posix" 00:19:06.509 } 00:19:06.509 }, 00:19:06.509 { 00:19:06.509 "method": "sock_impl_set_options", 00:19:06.509 "params": { 00:19:06.509 "impl_name": "ssl", 00:19:06.509 "recv_buf_size": 4096, 00:19:06.509 "send_buf_size": 4096, 00:19:06.509 "enable_recv_pipe": true, 00:19:06.509 "enable_quickack": false, 00:19:06.509 "enable_placement_id": 0, 00:19:06.509 "enable_zerocopy_send_server": true, 00:19:06.509 "enable_zerocopy_send_client": false, 00:19:06.509 "zerocopy_threshold": 0, 00:19:06.509 "tls_version": 0, 00:19:06.509 "enable_ktls": false 00:19:06.509 } 00:19:06.509 }, 00:19:06.509 { 00:19:06.509 "method": "sock_impl_set_options", 00:19:06.509 "params": { 00:19:06.509 "impl_name": "posix", 00:19:06.509 "recv_buf_size": 2097152, 00:19:06.509 "send_buf_size": 2097152, 00:19:06.509 "enable_recv_pipe": true, 00:19:06.509 "enable_quickack": false, 00:19:06.509 "enable_placement_id": 0, 00:19:06.509 "enable_zerocopy_send_server": true, 00:19:06.509 "enable_zerocopy_send_client": false, 
00:19:06.509 "zerocopy_threshold": 0, 00:19:06.509 "tls_version": 0, 00:19:06.509 "enable_ktls": false 00:19:06.509 } 00:19:06.509 } 00:19:06.509 ] 00:19:06.509 }, 00:19:06.510 { 00:19:06.510 "subsystem": "vmd", 00:19:06.510 "config": [] 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "subsystem": "accel", 00:19:06.510 "config": [ 00:19:06.510 { 00:19:06.510 "method": "accel_set_options", 00:19:06.510 "params": { 00:19:06.510 "small_cache_size": 128, 00:19:06.510 "large_cache_size": 16, 00:19:06.510 "task_count": 2048, 00:19:06.510 "sequence_count": 2048, 00:19:06.510 "buf_count": 2048 00:19:06.510 } 00:19:06.510 } 00:19:06.510 ] 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "subsystem": "bdev", 00:19:06.510 "config": [ 00:19:06.510 { 00:19:06.510 "method": "bdev_set_options", 00:19:06.510 "params": { 00:19:06.510 "bdev_io_pool_size": 65535, 00:19:06.510 "bdev_io_cache_size": 256, 00:19:06.510 "bdev_auto_examine": true, 00:19:06.510 "iobuf_small_cache_size": 128, 00:19:06.510 "iobuf_large_cache_size": 16 00:19:06.510 } 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "method": "bdev_raid_set_options", 00:19:06.510 "params": { 00:19:06.510 "process_window_size_kb": 1024, 00:19:06.510 "process_max_bandwidth_mb_sec": 0 00:19:06.510 } 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "method": "bdev_iscsi_set_options", 00:19:06.510 "params": { 00:19:06.510 "timeout_sec": 30 00:19:06.510 } 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "method": "bdev_nvme_set_options", 00:19:06.510 "params": { 00:19:06.510 "action_on_timeout": "none", 00:19:06.510 "timeout_us": 0, 00:19:06.510 "timeout_admin_us": 0, 00:19:06.510 "keep_alive_timeout_ms": 10000, 00:19:06.510 "arbitration_burst": 0, 00:19:06.510 "low_priority_weight": 0, 00:19:06.510 "medium_priority_weight": 0, 00:19:06.510 "high_priority_weight": 0, 00:19:06.510 "nvme_adminq_poll_period_us": 10000, 00:19:06.510 "nvme_ioq_poll_period_us": 0, 00:19:06.510 "io_queue_requests": 0, 00:19:06.510 "delay_cmd_submit": true, 00:19:06.510 "transport_retry_count": 4, 00:19:06.510 "bdev_retry_count": 3, 00:19:06.510 "transport_ack_timeout": 0, 00:19:06.510 "ctrlr_loss_timeout_sec": 0, 00:19:06.510 "reconnect_delay_sec": 0, 00:19:06.510 "fast_io_fail_timeout_sec": 0, 00:19:06.510 "disable_auto_failback": false, 00:19:06.510 "generate_uuids": false, 00:19:06.510 "transport_tos": 0, 00:19:06.510 "nvme_error_stat": false, 00:19:06.510 "rdma_srq_size": 0, 00:19:06.510 "io_path_stat": false, 00:19:06.510 "allow_accel_sequence": false, 00:19:06.510 "rdma_max_cq_size": 0, 00:19:06.510 "rdma_cm_event_timeout_ms": 0, 00:19:06.510 "dhchap_digests": [ 00:19:06.510 "sha256", 00:19:06.510 "sha384", 00:19:06.510 "sha512" 00:19:06.510 ], 00:19:06.510 "dhchap_dhgroups": [ 00:19:06.510 "null", 00:19:06.510 "ffdhe2048", 00:19:06.510 "ffdhe3072", 00:19:06.510 "ffdhe4096", 00:19:06.510 "ffdhe6144", 00:19:06.510 "ffdhe8192" 00:19:06.510 ], 00:19:06.510 "rdma_umr_per_io": false 00:19:06.510 } 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "method": "bdev_nvme_set_hotplug", 00:19:06.510 "params": { 00:19:06.510 "period_us": 100000, 00:19:06.510 "enable": false 00:19:06.510 } 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "method": "bdev_malloc_create", 00:19:06.510 "params": { 00:19:06.510 "name": "malloc0", 00:19:06.510 "num_blocks": 8192, 00:19:06.510 "block_size": 4096, 00:19:06.510 "physical_block_size": 4096, 00:19:06.510 "uuid": "e6e82592-8ebb-4679-9a7f-fe909fc25a3c", 00:19:06.510 "optimal_io_boundary": 0, 00:19:06.510 "md_size": 0, 00:19:06.510 "dif_type": 0, 00:19:06.510 "dif_is_head_of_md": false, 
00:19:06.510 "dif_pi_format": 0 00:19:06.510 } 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "method": "bdev_wait_for_examine" 00:19:06.510 } 00:19:06.510 ] 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "subsystem": "nbd", 00:19:06.510 "config": [] 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "subsystem": "scheduler", 00:19:06.510 "config": [ 00:19:06.510 { 00:19:06.510 "method": "framework_set_scheduler", 00:19:06.510 "params": { 00:19:06.510 "name": "static" 00:19:06.510 } 00:19:06.510 } 00:19:06.510 ] 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "subsystem": "nvmf", 00:19:06.510 "config": [ 00:19:06.510 { 00:19:06.510 "method": "nvmf_set_config", 00:19:06.510 "params": { 00:19:06.510 "discovery_filter": "match_any", 00:19:06.510 "admin_cmd_passthru": { 00:19:06.510 "identify_ctrlr": false 00:19:06.510 }, 00:19:06.510 "dhchap_digests": [ 00:19:06.510 "sha256", 00:19:06.510 "sha384", 00:19:06.510 "sha512" 00:19:06.510 ], 00:19:06.510 "dhchap_dhgroups": [ 00:19:06.510 "null", 00:19:06.510 "ffdhe2048", 00:19:06.510 "ffdhe3072", 00:19:06.510 "ffdhe4096", 00:19:06.510 "ffdhe6144", 00:19:06.510 "ffdhe8192" 00:19:06.510 ] 00:19:06.510 } 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "method": "nvmf_set_max_subsystems", 00:19:06.510 "params": { 00:19:06.510 "max_subsystems": 1024 00:19:06.510 } 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "method": "nvmf_set_crdt", 00:19:06.510 "params": { 00:19:06.510 "crdt1": 0, 00:19:06.510 "crdt2": 0, 00:19:06.510 "crdt3": 0 00:19:06.510 } 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "method": "nvmf_create_transport", 00:19:06.510 "params": { 00:19:06.510 "trtype": "TCP", 00:19:06.510 "max_queue_depth": 128, 00:19:06.510 "max_io_qpairs_per_ctrlr": 127, 00:19:06.510 "in_capsule_data_size": 4096, 00:19:06.510 "max_io_size": 131072, 00:19:06.510 "io_unit_size": 131072, 00:19:06.510 "max_aq_depth": 128, 00:19:06.510 "num_shared_buffers": 511, 00:19:06.510 "buf_cache_size": 4294967295, 00:19:06.510 "dif_insert_or_strip": false, 00:19:06.510 "zcopy": false, 00:19:06.510 "c2h_success": false, 00:19:06.510 "sock_priority": 0, 00:19:06.510 "abort_timeout_sec": 1, 00:19:06.510 "ack_timeout": 0, 00:19:06.510 "data_wr_pool_size": 0 00:19:06.510 } 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "method": "nvmf_create_subsystem", 00:19:06.510 "params": { 00:19:06.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.510 "allow_any_host": false, 00:19:06.510 "serial_number": "00000000000000000000", 00:19:06.510 "model_number": "SPDK bdev Controller", 00:19:06.510 "max_namespaces": 32, 00:19:06.510 "min_cntlid": 1, 00:19:06.510 "max_cntlid": 65519, 00:19:06.510 "ana_reporting": false 00:19:06.510 } 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "method": "nvmf_subsystem_add_host", 00:19:06.510 "params": { 00:19:06.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.510 "host": "nqn.2016-06.io.spdk:host1", 00:19:06.510 "psk": "key0" 00:19:06.510 } 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "method": "nvmf_subsystem_add_ns", 00:19:06.510 "params": { 00:19:06.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.510 "namespace": { 00:19:06.510 "nsid": 1, 00:19:06.510 "bdev_name": "malloc0", 00:19:06.510 "nguid": "E6E825928EBB46799A7FFE909FC25A3C", 00:19:06.510 "uuid": "e6e82592-8ebb-4679-9a7f-fe909fc25a3c", 00:19:06.510 "no_auto_visible": false 00:19:06.510 } 00:19:06.510 } 00:19:06.510 }, 00:19:06.510 { 00:19:06.510 "method": "nvmf_subsystem_add_listener", 00:19:06.510 "params": { 00:19:06.510 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:06.510 "listen_address": { 00:19:06.510 "trtype": "TCP", 
00:19:06.510 "adrfam": "IPv4", 00:19:06.510 "traddr": "10.0.0.2", 00:19:06.510 "trsvcid": "4420" 00:19:06.510 }, 00:19:06.510 "secure_channel": false, 00:19:06.510 "sock_impl": "ssl" 00:19:06.510 } 00:19:06.510 } 00:19:06.510 ] 00:19:06.510 } 00:19:06.510 ] 00:19:06.510 }' 00:19:06.510 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.510 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3135836 00:19:06.510 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:06.510 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3135836 00:19:06.510 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3135836 ']' 00:19:06.510 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.510 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.510 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.510 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.510 14:59:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.510 [2024-12-11 14:59:59.552096] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:19:06.510 [2024-12-11 14:59:59.552147] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.770 [2024-12-11 14:59:59.630323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.770 [2024-12-11 14:59:59.664946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.770 [2024-12-11 14:59:59.664984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.770 [2024-12-11 14:59:59.664992] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.770 [2024-12-11 14:59:59.664997] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.770 [2024-12-11 14:59:59.665002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:06.770 [2024-12-11 14:59:59.665629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.028 [2024-12-11 14:59:59.880045] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.028 [2024-12-11 14:59:59.912079] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:07.028 [2024-12-11 14:59:59.912313] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.597 15:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.597 15:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:07.597 15:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:07.597 15:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:07.597 15:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.597 15:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.597 15:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3135905 00:19:07.597 15:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3135905 /var/tmp/bdevperf.sock 00:19:07.597 15:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:07.597 "subsystems": [ 00:19:07.597 { 00:19:07.597 "subsystem": "keyring", 00:19:07.597 "config": [ 00:19:07.597 { 00:19:07.597 "method": "keyring_file_add_key", 00:19:07.597 "params": { 00:19:07.597 "name": "key0", 00:19:07.597 "path": "/tmp/tmp.54Dc5b2lHL" 00:19:07.597 } 00:19:07.597 } 00:19:07.597 ] 00:19:07.597 }, 00:19:07.597 { 00:19:07.597 "subsystem": "iobuf", 00:19:07.597 "config": [ 00:19:07.597 { 00:19:07.597 "method": "iobuf_set_options", 00:19:07.597 "params": { 00:19:07.597 "small_pool_count": 8192, 00:19:07.597 "large_pool_count": 1024, 00:19:07.597 "small_bufsize": 8192, 00:19:07.597 "large_bufsize": 135168, 00:19:07.597 "enable_numa": false 00:19:07.597 } 00:19:07.597 } 00:19:07.597 ] 00:19:07.597 }, 00:19:07.597 { 00:19:07.597 "subsystem": "sock", 00:19:07.597 "config": [ 00:19:07.597 { 00:19:07.597 "method": "sock_set_default_impl", 00:19:07.597 "params": { 00:19:07.597 "impl_name": "posix" 00:19:07.597 } 00:19:07.597 }, 00:19:07.597 { 00:19:07.597 "method": "sock_impl_set_options", 00:19:07.597 "params": { 00:19:07.597 "impl_name": "ssl", 00:19:07.597 "recv_buf_size": 4096, 00:19:07.597 "send_buf_size": 4096, 00:19:07.597 "enable_recv_pipe": true, 00:19:07.597 "enable_quickack": false, 00:19:07.597 "enable_placement_id": 0, 00:19:07.597 "enable_zerocopy_send_server": true, 00:19:07.597 "enable_zerocopy_send_client": false, 00:19:07.597 "zerocopy_threshold": 0, 00:19:07.597 "tls_version": 0, 00:19:07.597 "enable_ktls": false 00:19:07.597 } 00:19:07.597 }, 00:19:07.597 { 00:19:07.597 "method": "sock_impl_set_options", 00:19:07.597 "params": { 00:19:07.597 "impl_name": "posix", 00:19:07.597 "recv_buf_size": 2097152, 00:19:07.597 "send_buf_size": 2097152, 00:19:07.597 "enable_recv_pipe": true, 00:19:07.597 "enable_quickack": false, 00:19:07.597 "enable_placement_id": 0, 00:19:07.597 "enable_zerocopy_send_server": true, 00:19:07.597 "enable_zerocopy_send_client": false, 00:19:07.597 "zerocopy_threshold": 0, 00:19:07.597 "tls_version": 0, 00:19:07.597 "enable_ktls": false 00:19:07.597 } 00:19:07.597 } 
00:19:07.597 ] 00:19:07.597 }, 00:19:07.597 { 00:19:07.597 "subsystem": "vmd", 00:19:07.597 "config": [] 00:19:07.597 }, 00:19:07.597 { 00:19:07.597 "subsystem": "accel", 00:19:07.597 "config": [ 00:19:07.597 { 00:19:07.597 "method": "accel_set_options", 00:19:07.597 "params": { 00:19:07.597 "small_cache_size": 128, 00:19:07.597 "large_cache_size": 16, 00:19:07.597 "task_count": 2048, 00:19:07.597 "sequence_count": 2048, 00:19:07.597 "buf_count": 2048 00:19:07.597 } 00:19:07.597 } 00:19:07.597 ] 00:19:07.597 }, 00:19:07.597 { 00:19:07.597 "subsystem": "bdev", 00:19:07.597 "config": [ 00:19:07.597 { 00:19:07.597 "method": "bdev_set_options", 00:19:07.597 "params": { 00:19:07.597 "bdev_io_pool_size": 65535, 00:19:07.597 "bdev_io_cache_size": 256, 00:19:07.597 "bdev_auto_examine": true, 00:19:07.597 "iobuf_small_cache_size": 128, 00:19:07.597 "iobuf_large_cache_size": 16 00:19:07.597 } 00:19:07.597 }, 00:19:07.597 { 00:19:07.597 "method": "bdev_raid_set_options", 00:19:07.597 "params": { 00:19:07.597 "process_window_size_kb": 1024, 00:19:07.597 "process_max_bandwidth_mb_sec": 0 00:19:07.597 } 00:19:07.597 }, 00:19:07.597 { 00:19:07.597 "method": "bdev_iscsi_set_options", 00:19:07.597 "params": { 00:19:07.597 "timeout_sec": 30 00:19:07.597 } 00:19:07.597 }, 00:19:07.597 { 00:19:07.597 "method": "bdev_nvme_set_options", 00:19:07.597 "params": { 00:19:07.597 "action_on_timeout": "none", 00:19:07.597 "timeout_us": 0, 00:19:07.597 "timeout_admin_us": 0, 00:19:07.597 "keep_alive_timeout_ms": 10000, 00:19:07.597 "arbitration_burst": 0, 00:19:07.597 "low_priority_weight": 0, 00:19:07.597 "medium_priority_weight": 0, 00:19:07.597 "high_priority_weight": 0, 00:19:07.597 "nvme_adminq_poll_period_us": 10000, 00:19:07.597 "nvme_ioq_poll_period_us": 0, 00:19:07.597 "io_queue_requests": 512, 00:19:07.597 "delay_cmd_submit": true, 00:19:07.597 "transport_retry_count": 4, 00:19:07.597 "bdev_retry_count": 3, 00:19:07.597 "transport_ack_timeout": 0, 00:19:07.597 "ctrlr_loss_timeout_sec": 0, 00:19:07.597 "reconnect_delay_sec": 0, 00:19:07.597 "fast_io_fail_timeout_sec": 0, 00:19:07.597 "disable_auto_failback": false, 00:19:07.597 "generate_uuids": false, 00:19:07.597 "transport_tos": 0, 00:19:07.597 "nvme_error_stat": false, 00:19:07.597 "rdma_srq_size": 0, 00:19:07.597 "io_path_stat": false, 00:19:07.597 "allow_accel_sequence": false, 00:19:07.597 "rdma_max_cq_size": 0, 00:19:07.597 "rdma_cm_event_timeout_ms": 0, 00:19:07.597 "dhchap_digests": [ 00:19:07.597 "sha256", 00:19:07.597 "sha384", 00:19:07.597 "sha512" 00:19:07.597 ], 00:19:07.597 "dhchap_dhgroups": [ 00:19:07.597 "null", 00:19:07.597 "ffdhe2048", 00:19:07.597 "ffdhe3072", 00:19:07.597 "ffdhe4096", 00:19:07.597 "ffdhe6144", 00:19:07.597 "ffdhe8192" 00:19:07.597 ], 00:19:07.597 "rdma_umr_per_io": false 00:19:07.597 } 00:19:07.597 }, 00:19:07.597 { 00:19:07.597 "method": "bdev_nvme_attach_controller", 00:19:07.597 "params": { 00:19:07.597 "name": "nvme0", 00:19:07.597 "trtype": "TCP", 00:19:07.597 "adrfam": "IPv4", 00:19:07.597 "traddr": "10.0.0.2", 00:19:07.597 "trsvcid": "4420", 00:19:07.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.597 "prchk_reftag": false, 00:19:07.597 "prchk_guard": false, 00:19:07.597 "ctrlr_loss_timeout_sec": 0, 00:19:07.597 "reconnect_delay_sec": 0, 00:19:07.597 "fast_io_fail_timeout_sec": 0, 00:19:07.597 "psk": "key0", 00:19:07.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:07.597 "hdgst": false, 00:19:07.597 "ddgst": false, 00:19:07.597 "multipath": "multipath" 00:19:07.597 } 00:19:07.597 }, 00:19:07.597 { 
00:19:07.597 "method": "bdev_nvme_set_hotplug", 00:19:07.597 "params": { 00:19:07.597 "period_us": 100000, 00:19:07.597 "enable": false 00:19:07.597 } 00:19:07.597 }, 00:19:07.597 { 00:19:07.597 "method": "bdev_enable_histogram", 00:19:07.597 "params": { 00:19:07.597 "name": "nvme0n1", 00:19:07.597 "enable": true 00:19:07.597 } 00:19:07.597 }, 00:19:07.597 { 00:19:07.597 "method": "bdev_wait_for_examine" 00:19:07.597 } 00:19:07.597 ] 00:19:07.597 }, 00:19:07.597 { 00:19:07.597 "subsystem": "nbd", 00:19:07.597 "config": [] 00:19:07.597 } 00:19:07.597 ] 00:19:07.597 }' 00:19:07.597 15:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3135905 ']' 00:19:07.597 15:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:07.597 15:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:07.597 15:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.598 15:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:07.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:07.598 15:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.598 15:00:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.598 [2024-12-11 15:00:00.478492] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:19:07.598 [2024-12-11 15:00:00.478542] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3135905 ] 00:19:07.598 [2024-12-11 15:00:00.553850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.598 [2024-12-11 15:00:00.595352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.856 [2024-12-11 15:00:00.748723] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:08.423 15:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.423 15:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:08.423 15:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:08.423 15:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:08.682 15:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.682 15:00:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:08.682 Running I/O for 1 seconds... 
00:19:09.618 5403.00 IOPS, 21.11 MiB/s 00:19:09.618 Latency(us) 00:19:09.618 [2024-12-11T14:00:02.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.618 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:09.618 Verification LBA range: start 0x0 length 0x2000 00:19:09.618 nvme0n1 : 1.01 5460.21 21.33 0.00 0.00 23284.44 5470.83 21313.45 00:19:09.618 [2024-12-11T14:00:02.666Z] =================================================================================================================== 00:19:09.618 [2024-12-11T14:00:02.666Z] Total : 5460.21 21.33 0.00 0.00 23284.44 5470.83 21313.45 00:19:09.618 { 00:19:09.618 "results": [ 00:19:09.618 { 00:19:09.618 "job": "nvme0n1", 00:19:09.618 "core_mask": "0x2", 00:19:09.618 "workload": "verify", 00:19:09.618 "status": "finished", 00:19:09.618 "verify_range": { 00:19:09.618 "start": 0, 00:19:09.618 "length": 8192 00:19:09.618 }, 00:19:09.618 "queue_depth": 128, 00:19:09.618 "io_size": 4096, 00:19:09.618 "runtime": 1.012964, 00:19:09.618 "iops": 5460.213788446578, 00:19:09.618 "mibps": 21.328960111119446, 00:19:09.618 "io_failed": 0, 00:19:09.618 "io_timeout": 0, 00:19:09.618 "avg_latency_us": 23284.439078396077, 00:19:09.618 "min_latency_us": 5470.8313043478265, 00:19:09.618 "max_latency_us": 21313.44695652174 00:19:09.618 } 00:19:09.618 ], 00:19:09.618 "core_count": 1 00:19:09.618 } 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:09.877 nvmf_trace.0 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3135905 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3135905 ']' 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3135905 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
3135905 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3135905' 00:19:09.877 killing process with pid 3135905 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3135905 00:19:09.877 Received shutdown signal, test time was about 1.000000 seconds 00:19:09.877 00:19:09.877 Latency(us) 00:19:09.877 [2024-12-11T14:00:02.925Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.877 [2024-12-11T14:00:02.925Z] =================================================================================================================== 00:19:09.877 [2024-12-11T14:00:02.925Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:09.877 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3135905 00:19:10.136 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:10.136 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:10.136 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:10.136 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:10.136 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:10.136 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:10.136 15:00:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:10.136 rmmod nvme_tcp 00:19:10.136 rmmod nvme_fabrics 00:19:10.136 rmmod nvme_keyring 00:19:10.136 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:10.136 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:10.136 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:10.136 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3135836 ']' 00:19:10.136 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3135836 00:19:10.136 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3135836 ']' 00:19:10.136 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3135836 00:19:10.136 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:10.136 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:10.136 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3135836 00:19:10.136 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:10.136 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:10.136 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3135836' 00:19:10.136 killing process with pid 3135836 00:19:10.136 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3135836 00:19:10.136 15:00:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3135836 00:19:10.395 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:10.395 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:10.395 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:10.395 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:10.395 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:10.395 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:10.395 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:10.396 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:10.396 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:10.396 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.396 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:10.396 15:00:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.299 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:12.299 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.DDnr37sBiT /tmp/tmp.GDsoqErvnd /tmp/tmp.54Dc5b2lHL 00:19:12.299 00:19:12.299 real 1m19.685s 00:19:12.299 user 2m1.824s 00:19:12.299 sys 0m30.884s 00:19:12.299 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:12.299 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.299 ************************************ 00:19:12.299 END TEST nvmf_tls 00:19:12.299 ************************************ 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:12.558 ************************************ 00:19:12.558 START TEST nvmf_fips 00:19:12.558 ************************************ 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:12.558 * Looking for test storage... 
00:19:12.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/fips 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:12.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.558 --rc genhtml_branch_coverage=1 00:19:12.558 --rc genhtml_function_coverage=1 00:19:12.558 --rc genhtml_legend=1 00:19:12.558 --rc geninfo_all_blocks=1 00:19:12.558 --rc geninfo_unexecuted_blocks=1 00:19:12.558 00:19:12.558 ' 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:12.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.558 --rc genhtml_branch_coverage=1 00:19:12.558 --rc genhtml_function_coverage=1 00:19:12.558 --rc genhtml_legend=1 00:19:12.558 --rc geninfo_all_blocks=1 00:19:12.558 --rc geninfo_unexecuted_blocks=1 00:19:12.558 00:19:12.558 ' 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:12.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.558 --rc genhtml_branch_coverage=1 00:19:12.558 --rc genhtml_function_coverage=1 00:19:12.558 --rc genhtml_legend=1 00:19:12.558 --rc geninfo_all_blocks=1 00:19:12.558 --rc geninfo_unexecuted_blocks=1 00:19:12.558 00:19:12.558 ' 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:12.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:12.558 --rc genhtml_branch_coverage=1 00:19:12.558 --rc genhtml_function_coverage=1 00:19:12.558 --rc genhtml_legend=1 00:19:12.558 --rc geninfo_all_blocks=1 00:19:12.558 --rc geninfo_unexecuted_blocks=1 00:19:12.558 00:19:12.558 ' 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:12.558 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:12.819 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:12.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:12.820 15:00:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:12.820 Error setting digest 00:19:12.820 40E2A92C317F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:12.820 40E2A92C317F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:12.820 
15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:12.820 15:00:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:19.387 15:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:19.387 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:19.387 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:19.387 15:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:19.387 Found net devices under 0000:86:00.0: cvl_0_0 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:19.387 Found net devices under 0000:86:00.1: cvl_0_1 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:19.387 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:19.388 15:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:19.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:19:19.388 00:19:19.388 --- 10.0.0.2 ping statistics --- 00:19:19.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.388 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:19.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:19.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:19:19.388 00:19:19.388 --- 10.0.0.1 ping statistics --- 00:19:19.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.388 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3140405 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3140405 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3140405 ']' 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.388 15:00:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:19.388 [2024-12-11 15:00:11.781777] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:19:19.388 [2024-12-11 15:00:11.781828] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.388 [2024-12-11 15:00:11.847027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.388 [2024-12-11 15:00:11.886634] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.388 [2024-12-11 15:00:11.886665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.388 [2024-12-11 15:00:11.886672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.388 [2024-12-11 15:00:11.886678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.388 [2024-12-11 15:00:11.886684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:19.388 [2024-12-11 15:00:11.887270] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.646 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.646 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:19.646 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:19.646 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:19.646 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:19.646 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.646 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:19.646 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:19.646 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:19.646 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Vxh 00:19:19.646 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:19.646 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Vxh 00:19:19.646 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Vxh 00:19:19.646 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Vxh 00:19:19.646 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:19:19.904 [2024-12-11 15:00:12.814029] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.904 [2024-12-11 15:00:12.830020] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:19.904 [2024-12-11 15:00:12.830233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.904 malloc0 00:19:19.904 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:19.904 15:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3140658 00:19:19.904 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:19.904 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3140658 /var/tmp/bdevperf.sock 00:19:19.904 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3140658 ']' 00:19:19.904 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:19.904 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.904 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:19.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:19.904 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.904 15:00:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:20.162 [2024-12-11 15:00:12.956310] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:19:20.162 [2024-12-11 15:00:12.956357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3140658 ] 00:19:20.162 [2024-12-11 15:00:13.029650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.162 [2024-12-11 15:00:13.071220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.162 15:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.162 15:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:20.162 15:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Vxh 00:19:20.419 15:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:20.676 [2024-12-11 15:00:13.548052] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:20.676 TLSTESTn1 00:19:20.676 15:00:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:20.934 Running I/O for 10 seconds... 
00:19:22.798 5267.00 IOPS, 20.57 MiB/s [2024-12-11T14:00:16.779Z] 5396.50 IOPS, 21.08 MiB/s [2024-12-11T14:00:18.151Z] 5290.33 IOPS, 20.67 MiB/s [2024-12-11T14:00:19.084Z] 5198.25 IOPS, 20.31 MiB/s [2024-12-11T14:00:20.016Z] 5117.40 IOPS, 19.99 MiB/s [2024-12-11T14:00:20.960Z] 5068.83 IOPS, 19.80 MiB/s [2024-12-11T14:00:21.892Z] 5045.71 IOPS, 19.71 MiB/s [2024-12-11T14:00:22.825Z] 5013.75 IOPS, 19.58 MiB/s [2024-12-11T14:00:24.198Z] 4987.89 IOPS, 19.48 MiB/s [2024-12-11T14:00:24.198Z] 4957.60 IOPS, 19.37 MiB/s 00:19:31.150 Latency(us) 00:19:31.150 [2024-12-11T14:00:24.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.150 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:31.150 Verification LBA range: start 0x0 length 0x2000 00:19:31.150 TLSTESTn1 : 10.02 4961.93 19.38 0.00 0.00 25756.29 7208.96 30773.43 00:19:31.150 [2024-12-11T14:00:24.198Z] =================================================================================================================== 00:19:31.150 [2024-12-11T14:00:24.198Z] Total : 4961.93 19.38 0.00 0.00 25756.29 7208.96 30773.43 00:19:31.150 { 00:19:31.150 "results": [ 00:19:31.150 { 00:19:31.150 "job": "TLSTESTn1", 00:19:31.150 "core_mask": "0x4", 00:19:31.150 "workload": "verify", 00:19:31.150 "status": "finished", 00:19:31.150 "verify_range": { 00:19:31.150 "start": 0, 00:19:31.150 "length": 8192 00:19:31.150 }, 00:19:31.150 "queue_depth": 128, 00:19:31.150 "io_size": 4096, 00:19:31.150 "runtime": 10.017066, 00:19:31.150 "iops": 4961.931966905279, 00:19:31.150 "mibps": 19.382546745723747, 00:19:31.150 "io_failed": 0, 00:19:31.150 "io_timeout": 0, 00:19:31.150 "avg_latency_us": 25756.289434075814, 00:19:31.150 "min_latency_us": 7208.96, 00:19:31.150 "max_latency_us": 30773.426086956522 00:19:31.150 } 00:19:31.150 ], 00:19:31.150 "core_count": 1 00:19:31.150 } 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:31.150 nvmf_trace.0 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3140658 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3140658 ']' 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@958 -- # kill -0 3140658 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3140658 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3140658' 00:19:31.150 killing process with pid 3140658 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3140658 00:19:31.150 Received shutdown signal, test time was about 10.000000 seconds 00:19:31.150 00:19:31.150 Latency(us) 00:19:31.150 [2024-12-11T14:00:24.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.150 [2024-12-11T14:00:24.198Z] =================================================================================================================== 00:19:31.150 [2024-12-11T14:00:24.198Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:31.150 15:00:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3140658 00:19:31.150 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:31.150 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:31.150 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:31.150 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:31.150 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:31.150 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:31.150 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:31.150 rmmod nvme_tcp 00:19:31.150 rmmod nvme_fabrics 00:19:31.150 rmmod nvme_keyring 00:19:31.150 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:31.150 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:31.150 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:31.150 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3140405 ']' 00:19:31.150 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3140405 00:19:31.150 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3140405 ']' 00:19:31.150 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3140405 00:19:31.150 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:31.150 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.150 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3140405 00:19:31.456 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:31.456 15:00:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:31.456 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3140405' 00:19:31.456 killing process with pid 3140405 00:19:31.456 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3140405 00:19:31.456 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3140405 00:19:31.456 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:31.456 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:31.456 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:31.456 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:19:31.456 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:31.456 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:31.456 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:31.456 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:31.456 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:31.456 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.456 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:31.456 15:00:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.033 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:34.033 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Vxh 00:19:34.033 00:19:34.033 real 0m21.058s 00:19:34.033 user 0m21.312s 00:19:34.033 sys 0m10.415s 00:19:34.033 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.033 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:34.033 ************************************ 00:19:34.033 END TEST nvmf_fips 00:19:34.033 ************************************ 00:19:34.033 15:00:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:34.033 15:00:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:34.033 15:00:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.033 15:00:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:34.033 ************************************ 00:19:34.033 START TEST nvmf_control_msg_list 00:19:34.034 ************************************ 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:34.034 * Looking for test storage... 
00:19:34.034 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:34.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.034 --rc genhtml_branch_coverage=1 00:19:34.034 --rc genhtml_function_coverage=1 00:19:34.034 --rc genhtml_legend=1 00:19:34.034 --rc geninfo_all_blocks=1 00:19:34.034 --rc geninfo_unexecuted_blocks=1 00:19:34.034 00:19:34.034 ' 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:34.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.034 --rc genhtml_branch_coverage=1 00:19:34.034 --rc genhtml_function_coverage=1 00:19:34.034 --rc genhtml_legend=1 00:19:34.034 --rc geninfo_all_blocks=1 00:19:34.034 --rc geninfo_unexecuted_blocks=1 00:19:34.034 00:19:34.034 ' 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:34.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.034 --rc genhtml_branch_coverage=1 00:19:34.034 --rc genhtml_function_coverage=1 00:19:34.034 --rc genhtml_legend=1 00:19:34.034 --rc geninfo_all_blocks=1 00:19:34.034 --rc geninfo_unexecuted_blocks=1 00:19:34.034 00:19:34.034 ' 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:34.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.034 --rc genhtml_branch_coverage=1 00:19:34.034 --rc genhtml_function_coverage=1 00:19:34.034 --rc genhtml_legend=1 00:19:34.034 --rc geninfo_all_blocks=1 00:19:34.034 --rc geninfo_unexecuted_blocks=1 00:19:34.034 00:19:34.034 ' 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:34.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:34.034 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:34.035 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:34.035 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:34.035 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:34.035 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:34.035 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.035 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:34.035 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:34.035 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:34.035 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.035 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.035 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.035 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:34.035 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:34.035 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:34.035 15:00:26 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:39.307 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:39.307 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:39.307 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:39.307 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:39.307 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:39.307 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:39.307 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:39.307 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:39.307 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:39.566 15:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:39.566 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.566 15:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:39.566 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.566 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:39.567 Found net devices under 0000:86:00.0: cvl_0_0 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:39.567 Found net devices under 0000:86:00.1: cvl_0_1 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:39.567 15:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:39.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:39.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:19:39.567 00:19:39.567 --- 10.0.0.2 ping statistics --- 00:19:39.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.567 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:39.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:39.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:19:39.567 00:19:39.567 --- 10.0.0.1 ping statistics --- 00:19:39.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.567 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.567 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:39.825 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:39.825 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:39.825 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:39.825 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:39.825 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:39.825 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3146024 00:19:39.825 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3146024 00:19:39.826 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:39.826 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3146024 ']' 00:19:39.826 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.826 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.826 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.826 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.826 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:39.826 [2024-12-11 15:00:32.711785] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:19:39.826 [2024-12-11 15:00:32.711837] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.826 [2024-12-11 15:00:32.791688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.826 [2024-12-11 15:00:32.831729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.826 [2024-12-11 15:00:32.831765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.826 [2024-12-11 15:00:32.831772] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.826 [2024-12-11 15:00:32.831778] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.826 [2024-12-11 15:00:32.831783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:39.826 [2024-12-11 15:00:32.832335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:40.084 [2024-12-11 15:00:32.969495] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:40.084 Malloc0 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.084 15:00:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:40.084 15:00:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.084 15:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:40.084 15:00:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.084 15:00:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:40.084 [2024-12-11 15:00:33.009858] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.084 15:00:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.084 15:00:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3146044 00:19:40.084 15:00:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:40.084 15:00:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3146045 00:19:40.084 15:00:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:40.084 15:00:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3146046 00:19:40.084 15:00:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:40.084 15:00:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3146044 00:19:40.084 [2024-12-11 15:00:33.088280] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:40.084 [2024-12-11 15:00:33.108285] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:40.084 [2024-12-11 15:00:33.108430] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:41.458 Initializing NVMe Controllers 00:19:41.458 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:41.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:41.458 Initialization complete. Launching workers. 
00:19:41.458 ======================================================== 00:19:41.458 Latency(us) 00:19:41.458 Device Information : IOPS MiB/s Average min max 00:19:41.458 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 7444.00 29.08 134.00 124.73 463.08 00:19:41.458 ======================================================== 00:19:41.458 Total : 7444.00 29.08 134.00 124.73 463.08 00:19:41.458 00:19:41.458 Initializing NVMe Controllers 00:19:41.458 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:41.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:41.458 Initialization complete. Launching workers. 00:19:41.458 ======================================================== 00:19:41.458 Latency(us) 00:19:41.458 Device Information : IOPS MiB/s Average min max 00:19:41.458 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 47.00 0.18 21981.28 160.91 41909.18 00:19:41.458 ======================================================== 00:19:41.458 Total : 47.00 0.18 21981.28 160.91 41909.18 00:19:41.458 00:19:41.458 Initializing NVMe Controllers 00:19:41.458 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:41.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:41.458 Initialization complete. Launching workers. 00:19:41.458 ======================================================== 00:19:41.458 Latency(us) 00:19:41.458 Device Information : IOPS MiB/s Average min max 00:19:41.458 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 40.00 0.16 25804.04 156.91 41952.45 00:19:41.458 ======================================================== 00:19:41.458 Total : 40.00 0.16 25804.04 156.91 41952.45 00:19:41.458 00:19:41.458 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3146045 00:19:41.458 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3146046 00:19:41.458 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:41.458 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:41.458 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:41.458 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:41.458 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:41.458 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:41.458 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:41.458 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:41.458 rmmod nvme_tcp 00:19:41.458 rmmod nvme_fabrics 00:19:41.458 rmmod nvme_keyring 00:19:41.458 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:41.458 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:41.458 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:41.459 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # 
'[' -n 3146024 ']' 00:19:41.459 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3146024 00:19:41.459 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3146024 ']' 00:19:41.459 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3146024 00:19:41.459 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:41.459 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:41.459 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3146024 00:19:41.459 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:41.459 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:41.459 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3146024' 00:19:41.459 killing process with pid 3146024 00:19:41.459 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3146024 00:19:41.459 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3146024 00:19:41.717 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:41.717 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:41.717 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:41.717 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:41.717 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:41.717 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:41.717 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:41.717 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:41.717 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:41.717 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.717 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.717 15:00:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.250 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:44.250 00:19:44.250 real 0m10.166s 00:19:44.250 user 0m6.790s 00:19:44.250 sys 0m5.478s 00:19:44.250 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:44.250 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:44.250 ************************************ 00:19:44.250 END TEST nvmf_control_msg_list 00:19:44.250 ************************************ 
00:19:44.250 15:00:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:44.250 15:00:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:44.250 15:00:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:44.250 15:00:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:44.250 ************************************ 00:19:44.250 START TEST nvmf_wait_for_buf 00:19:44.250 ************************************ 00:19:44.250 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:44.250 * Looking for test storage... 00:19:44.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:19:44.250 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:44.250 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:44.250 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:44.250 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:44.250 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:44.250 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:44.250 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:44.250 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:44.250 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:44.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.251 --rc genhtml_branch_coverage=1 00:19:44.251 --rc genhtml_function_coverage=1 00:19:44.251 --rc genhtml_legend=1 00:19:44.251 --rc geninfo_all_blocks=1 00:19:44.251 --rc geninfo_unexecuted_blocks=1 00:19:44.251 00:19:44.251 ' 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:44.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.251 --rc genhtml_branch_coverage=1 00:19:44.251 --rc genhtml_function_coverage=1 00:19:44.251 --rc genhtml_legend=1 00:19:44.251 --rc geninfo_all_blocks=1 00:19:44.251 --rc geninfo_unexecuted_blocks=1 00:19:44.251 00:19:44.251 ' 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:44.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.251 --rc genhtml_branch_coverage=1 00:19:44.251 --rc genhtml_function_coverage=1 00:19:44.251 --rc genhtml_legend=1 00:19:44.251 --rc geninfo_all_blocks=1 00:19:44.251 --rc geninfo_unexecuted_blocks=1 00:19:44.251 00:19:44.251 ' 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:44.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.251 --rc genhtml_branch_coverage=1 00:19:44.251 --rc genhtml_function_coverage=1 00:19:44.251 --rc genhtml_legend=1 00:19:44.251 --rc geninfo_all_blocks=1 00:19:44.251 --rc geninfo_unexecuted_blocks=1 00:19:44.251 00:19:44.251 ' 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:19:44.251 15:00:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:44.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:44.251 15:00:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.251 15:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:44.251 15:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:44.251 15:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:44.251 15:00:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.819 
15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:50.819 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:50.819 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:50.819 Found net devices under 0000:86:00.0: cvl_0_0 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:50.819 Found net devices under 0000:86:00.1: cvl_0_1 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:50.819 15:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:50.819 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:50.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:50.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:19:50.820 00:19:50.820 --- 10.0.0.2 ping statistics --- 00:19:50.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.820 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:50.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:50.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:19:50.820 00:19:50.820 --- 10.0.0.1 ping statistics --- 00:19:50.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.820 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3149810 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3149810 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3149810 ']' 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.820 15:00:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:50.820 [2024-12-11 15:00:42.949678] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
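Before the target application is configured, nvmftestinit has carved the two E810 ports into a target/initiator pair. Condensed from the ip/iptables commands traced above, the setup amounts to roughly the following (a sketch using the device names, addresses, and port from this log; paths are shortened):

    # move one port into a private namespace for the target, keep the other
    # in the root namespace for the initiator
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator side gets 10.0.0.1, target side (inside the namespace) 10.0.0.2
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow NVMe/TCP traffic to the listener port and verify reachability
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # finally, launch the target inside the namespace, paused until RPCs arrive
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc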
00:19:50.820 [2024-12-11 15:00:42.949720] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.820 [2024-12-11 15:00:43.032734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.820 [2024-12-11 15:00:43.073253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.820 [2024-12-11 15:00:43.073287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.820 [2024-12-11 15:00:43.073297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.820 [2024-12-11 15:00:43.073303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.820 [2024-12-11 15:00:43.073308] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:50.820 [2024-12-11 15:00:43.073858] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:50.820 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.820 15:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:51.079 Malloc0 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:51.079 [2024-12-11 15:00:43.918440] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:51.079 [2024-12-11 15:00:43.946635] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.079 15:00:43 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:51.079 [2024-12-11 15:00:44.032699] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:52.453 Initializing NVMe Controllers 00:19:52.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:52.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:52.453 Initialization complete. Launching workers. 00:19:52.453 ======================================================== 00:19:52.453 Latency(us) 00:19:52.453 Device Information : IOPS MiB/s Average min max 00:19:52.453 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32238.74 7282.18 63855.50 00:19:52.453 ======================================================== 00:19:52.453 Total : 129.00 16.12 32238.74 7282.18 63855.50 00:19:52.453 00:19:52.453 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:52.453 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:52.453 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.453 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:52.453 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:52.712 rmmod nvme_tcp 00:19:52.712 rmmod nvme_fabrics 00:19:52.712 rmmod nvme_keyring 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3149810 ']' 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3149810 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3149810 ']' 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3149810 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3149810 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3149810' 00:19:52.712 killing process with pid 3149810 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3149810 00:19:52.712 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3149810 00:19:52.971 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:52.971 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:52.971 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:52.971 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:52.971 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:52.971 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:52.971 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:52.971 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:52.971 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:52.971 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.971 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:52.971 15:00:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.876 15:00:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:54.876 00:19:54.876 real 0m11.086s 00:19:54.876 user 0m4.856s 00:19:54.876 sys 0m4.846s 00:19:54.876 15:00:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.876 15:00:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:54.876 ************************************ 00:19:54.876 END TEST nvmf_wait_for_buf 00:19:54.876 ************************************ 00:19:54.876 15:00:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:54.876 15:00:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:54.876 15:00:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:54.876 15:00:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:54.876 15:00:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:54.876 15:00:47 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:01.447 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:01.447 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:01.447 Found net devices under 0000:86:00.0: cvl_0_0 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:01.447 Found net devices under 0000:86:00.1: cvl_0_1 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:01.447 ************************************ 00:20:01.447 START TEST nvmf_perf_adq 00:20:01.447 ************************************ 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:01.447 * Looking for test storage... 00:20:01.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:01.447 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:01.448 15:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:01.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.448 --rc genhtml_branch_coverage=1 00:20:01.448 --rc genhtml_function_coverage=1 00:20:01.448 --rc genhtml_legend=1 00:20:01.448 --rc geninfo_all_blocks=1 00:20:01.448 --rc geninfo_unexecuted_blocks=1 00:20:01.448 00:20:01.448 ' 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:01.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.448 --rc genhtml_branch_coverage=1 00:20:01.448 --rc genhtml_function_coverage=1 00:20:01.448 --rc genhtml_legend=1 00:20:01.448 --rc geninfo_all_blocks=1 00:20:01.448 --rc geninfo_unexecuted_blocks=1 00:20:01.448 00:20:01.448 ' 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:01.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.448 --rc genhtml_branch_coverage=1 00:20:01.448 --rc genhtml_function_coverage=1 00:20:01.448 --rc genhtml_legend=1 00:20:01.448 --rc geninfo_all_blocks=1 00:20:01.448 --rc geninfo_unexecuted_blocks=1 00:20:01.448 00:20:01.448 ' 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:01.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.448 --rc genhtml_branch_coverage=1 00:20:01.448 --rc genhtml_function_coverage=1 00:20:01.448 --rc genhtml_legend=1 00:20:01.448 --rc geninfo_all_blocks=1 00:20:01.448 --rc geninfo_unexecuted_blocks=1 00:20:01.448 00:20:01.448 ' 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 
00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:01.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:01.448 15:00:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:01.448 15:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:06.721 15:00:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:06.721 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:06.721 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:06.721 Found net devices under 0000:86:00.0: cvl_0_0 00:20:06.721 15:00:59 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:06.721 Found net devices under 0000:86:00.1: cvl_0_1 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:06.721 15:00:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:07.657 15:01:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:10.190 15:01:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:15.461 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:15.461 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:15.461 Found net devices under 0000:86:00.0: cvl_0_0 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:15.461 Found net devices under 0000:86:00.1: cvl_0_1 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:15.461 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.462 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.462 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:15.462 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:15.462 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:15.462 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:15.462 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:15.462 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:15.462 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:15.462 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.462 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:15.462 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:15.462 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:15.462 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:15.462 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:15.462 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:15.462 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:15.462 15:01:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:15.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:15.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:20:15.462 00:20:15.462 --- 10.0.0.2 ping statistics --- 00:20:15.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.462 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:15.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:15.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:20:15.462 00:20:15.462 --- 10.0.0.1 ping statistics --- 00:20:15.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:15.462 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3158367 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3158367 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3158367 ']' 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:15.462 15:01:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:15.462 [2024-12-11 15:01:08.242983] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
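The nvmf_tcp_init / nvmfappstart sequence traced above boils down to a short piece of iproute2/iptables plumbing plus launching nvmf_tgt inside the target namespace and waiting on its RPC socket. The following is only a condensed recap of those steps for readability, not part of the test scripts themselves; the interface names (cvl_0_0 / cvl_0_1), the namespace name, the 10.0.0.x addresses, port 4420 and the workspace path are taken from this run, while the shell variable names and `set -euo pipefail` are additions of this sketch.

#!/usr/bin/env bash
# Sketch of the namespace plumbing performed by nvmf_tcp_init above (assumptions noted in the lead-in).
set -euo pipefail

TARGET_IF=cvl_0_0          # NIC port moved into the target namespace
INITIATOR_IF=cvl_0_1       # NIC port left in the host namespace for the initiator
NS=cvl_0_0_ns_spdk         # network namespace hosting the SPDK target
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk   # workspace path from this job

# Move the target port into its own namespace and give each side a 10.0.0.x/24 address.
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port on the initiator-side interface, tagged so cleanup can strip the rule later.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Launch nvmf_tgt inside the namespace with RPC held back until framework_start_init is issued.
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

After this, waitforlisten in the log simply polls for the default UNIX RPC socket (/var/tmp/spdk.sock) before the subsequent rpc_cmd calls configure the target.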
00:20:15.462 [2024-12-11 15:01:08.243027] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.462 [2024-12-11 15:01:08.322903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:15.462 [2024-12-11 15:01:08.365007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.462 [2024-12-11 15:01:08.365044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.462 [2024-12-11 15:01:08.365052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.462 [2024-12-11 15:01:08.365059] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.462 [2024-12-11 15:01:08.365064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:15.462 [2024-12-11 15:01:08.366592] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.462 [2024-12-11 15:01:08.366702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:15.462 [2024-12-11 15:01:08.366807] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.462 [2024-12-11 15:01:08.366808] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.395 
15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.395 [2024-12-11 15:01:09.255511] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.395 Malloc1 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.395 [2024-12-11 15:01:09.314820] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3158506 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:16.395 15:01:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:18.292 15:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:18.292 15:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.292 15:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:18.550 15:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.550 15:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:18.550 "tick_rate": 2300000000, 00:20:18.550 "poll_groups": [ 00:20:18.550 { 00:20:18.550 "name": "nvmf_tgt_poll_group_000", 00:20:18.550 "admin_qpairs": 1, 00:20:18.550 "io_qpairs": 1, 00:20:18.550 "current_admin_qpairs": 1, 00:20:18.550 "current_io_qpairs": 1, 00:20:18.550 "pending_bdev_io": 0, 00:20:18.550 "completed_nvme_io": 19721, 00:20:18.550 "transports": [ 00:20:18.550 { 00:20:18.550 "trtype": "TCP" 00:20:18.550 } 00:20:18.550 ] 00:20:18.550 }, 00:20:18.550 { 00:20:18.550 "name": "nvmf_tgt_poll_group_001", 00:20:18.550 "admin_qpairs": 0, 00:20:18.550 "io_qpairs": 1, 00:20:18.550 "current_admin_qpairs": 0, 00:20:18.550 "current_io_qpairs": 1, 00:20:18.550 "pending_bdev_io": 0, 00:20:18.550 "completed_nvme_io": 20150, 00:20:18.550 "transports": [ 00:20:18.550 { 00:20:18.550 "trtype": "TCP" 00:20:18.550 } 00:20:18.550 ] 00:20:18.550 }, 00:20:18.550 { 00:20:18.550 "name": "nvmf_tgt_poll_group_002", 00:20:18.550 "admin_qpairs": 0, 00:20:18.550 "io_qpairs": 1, 00:20:18.550 "current_admin_qpairs": 0, 00:20:18.550 "current_io_qpairs": 1, 00:20:18.550 "pending_bdev_io": 0, 00:20:18.550 "completed_nvme_io": 19987, 00:20:18.550 "transports": [ 00:20:18.550 { 00:20:18.550 "trtype": "TCP" 00:20:18.550 } 00:20:18.550 ] 00:20:18.550 }, 00:20:18.550 { 00:20:18.550 "name": "nvmf_tgt_poll_group_003", 00:20:18.550 "admin_qpairs": 0, 00:20:18.550 "io_qpairs": 1, 00:20:18.550 "current_admin_qpairs": 0, 00:20:18.550 "current_io_qpairs": 1, 00:20:18.550 "pending_bdev_io": 0, 00:20:18.550 "completed_nvme_io": 19527, 00:20:18.550 "transports": [ 00:20:18.550 { 00:20:18.550 "trtype": "TCP" 00:20:18.550 } 00:20:18.550 ] 00:20:18.550 } 00:20:18.550 ] 00:20:18.550 }' 00:20:18.550 15:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:18.550 15:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:18.550 15:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:18.550 15:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:18.550 15:01:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3158506 00:20:26.652 Initializing NVMe Controllers 00:20:26.652 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:26.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:26.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:26.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:26.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:20:26.652 Initialization complete. Launching workers. 00:20:26.652 ======================================================== 00:20:26.652 Latency(us) 00:20:26.652 Device Information : IOPS MiB/s Average min max 00:20:26.652 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10662.20 41.65 6003.04 1923.51 10218.32 00:20:26.652 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10721.30 41.88 5969.14 2330.69 9941.77 00:20:26.652 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10410.50 40.67 6146.77 2244.22 10632.47 00:20:26.652 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10539.70 41.17 6072.76 2200.49 10357.31 00:20:26.652 ======================================================== 00:20:26.652 Total : 42333.69 165.37 6047.16 1923.51 10632.47 00:20:26.652 00:20:26.652 [2024-12-11 15:01:19.480834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf18360 is same with the state(6) to be set 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:26.652 rmmod nvme_tcp 00:20:26.652 rmmod nvme_fabrics 00:20:26.652 rmmod nvme_keyring 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3158367 ']' 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3158367 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3158367 ']' 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3158367 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3158367 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3158367' 00:20:26.652 killing process with pid 3158367 00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3158367 
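Once spdk_nvme_perf completes, the key assertion in this test is the one made just before the results (perf_adq.sh@85-87): nvmf_get_stats must report exactly one active I/O queue pair on each of the four poll groups, i.e. socket placement-id/ADQ steered each connection onto its own group. A standalone sketch of that same check is shown below; rpc_cmd in the test wraps SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket, and the workspace path and expected count of 4 (one poll group per core in the 0xF target mask) are taken from this run.

#!/usr/bin/env bash
# Standalone version of the poll-group check from perf_adq.sh@85-87 (assumptions noted in the lead-in).
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
EXPECTED=4   # one nvmf poll group per core in the -m 0xF mask

# Count poll groups that currently serve exactly one I/O queue pair, mirroring the test's jq pipeline.
count=$("$SPDK/scripts/rpc.py" nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)

if [[ "$count" -ne "$EXPECTED" ]]; then
    echo "ADQ check failed: $count poll groups with one active I/O qpair (expected $EXPECTED)" >&2
    exit 1
fi
echo "ADQ check passed: connections spread across all $EXPECTED poll groups"

The completed_nvme_io counters in the stats blob above (~19.5k-20.2k per group) show the same thing from another angle: the four perf connections ended up roughly evenly distributed, one per poll group.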
00:20:26.652 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3158367 00:20:26.911 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:26.911 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:26.911 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:26.911 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:26.911 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:26.911 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:26.911 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:26.911 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:26.911 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:26.911 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.911 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.911 15:01:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.815 15:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:29.073 15:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:29.073 15:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:29.073 15:01:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:30.009 15:01:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:32.545 15:01:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:37.820 15:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:37.820 15:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:37.820 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:37.820 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:37.820 Found net devices under 0000:86:00.0: cvl_0_0 00:20:37.820 15:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:37.820 Found net devices under 0000:86:00.1: cvl_0_1 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.820 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # 
ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:37.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:20:37.821 00:20:37.821 --- 10.0.0.2 ping statistics --- 00:20:37.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.821 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:20:37.821 00:20:37.821 --- 10.0.0.1 ping statistics --- 00:20:37.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.821 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:37.821 15:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:37.821 net.core.busy_poll = 1 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:37.821 net.core.busy_read = 1 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3162310 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3162310 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3162310 ']' 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.821 15:01:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.079 [2024-12-11 15:01:30.882894] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:20:38.079 [2024-12-11 15:01:30.882937] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.079 [2024-12-11 15:01:30.960447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:38.079 [2024-12-11 15:01:31.002941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.079 [2024-12-11 15:01:31.002979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.079 [2024-12-11 15:01:31.002986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.079 [2024-12-11 15:01:31.002994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.079 [2024-12-11 15:01:31.002999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.079 [2024-12-11 15:01:31.004428] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.079 [2024-12-11 15:01:31.004539] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.079 [2024-12-11 15:01:31.004644] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.079 [2024-12-11 15:01:31.004645] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.079 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.079 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:38.079 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:38.079 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.079 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.079 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.079 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:38.079 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:38.079 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.079 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:38.079 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.079 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.337 
15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.337 [2024-12-11 15:01:31.223404] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.337 Malloc1 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.337 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.338 [2024-12-11 15:01:31.283132] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.338 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.338 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3162426 00:20:38.338 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:38.338 15:01:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:40.859 15:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:40.859 15:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.859 15:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.859 15:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.859 15:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:40.859 "tick_rate": 2300000000, 00:20:40.859 "poll_groups": [ 00:20:40.859 { 00:20:40.859 "name": "nvmf_tgt_poll_group_000", 00:20:40.859 "admin_qpairs": 1, 00:20:40.859 "io_qpairs": 4, 00:20:40.859 "current_admin_qpairs": 1, 00:20:40.859 "current_io_qpairs": 4, 00:20:40.859 "pending_bdev_io": 0, 00:20:40.859 "completed_nvme_io": 42866, 00:20:40.859 "transports": [ 00:20:40.859 { 00:20:40.859 "trtype": "TCP" 00:20:40.859 } 00:20:40.859 ] 00:20:40.859 }, 00:20:40.859 { 00:20:40.859 "name": "nvmf_tgt_poll_group_001", 00:20:40.859 "admin_qpairs": 0, 00:20:40.859 "io_qpairs": 0, 00:20:40.859 "current_admin_qpairs": 0, 00:20:40.859 "current_io_qpairs": 0, 00:20:40.859 "pending_bdev_io": 0, 00:20:40.859 "completed_nvme_io": 0, 00:20:40.859 "transports": [ 00:20:40.859 { 00:20:40.859 "trtype": "TCP" 00:20:40.859 } 00:20:40.859 ] 00:20:40.859 }, 00:20:40.859 { 00:20:40.859 "name": "nvmf_tgt_poll_group_002", 00:20:40.859 "admin_qpairs": 0, 00:20:40.859 "io_qpairs": 0, 00:20:40.859 "current_admin_qpairs": 0, 00:20:40.859 "current_io_qpairs": 0, 00:20:40.859 "pending_bdev_io": 0, 00:20:40.859 "completed_nvme_io": 0, 00:20:40.859 "transports": [ 00:20:40.859 { 00:20:40.859 "trtype": "TCP" 00:20:40.859 } 00:20:40.859 ] 00:20:40.859 }, 00:20:40.859 { 00:20:40.859 "name": "nvmf_tgt_poll_group_003", 00:20:40.859 "admin_qpairs": 0, 00:20:40.859 "io_qpairs": 0, 00:20:40.859 "current_admin_qpairs": 0, 00:20:40.859 "current_io_qpairs": 0, 00:20:40.859 "pending_bdev_io": 0, 00:20:40.859 "completed_nvme_io": 0, 00:20:40.859 "transports": [ 00:20:40.859 { 00:20:40.859 "trtype": "TCP" 00:20:40.859 } 00:20:40.859 ] 00:20:40.859 } 00:20:40.859 ] 00:20:40.859 }' 00:20:40.859 15:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:40.859 15:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:40.859 15:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:20:40.859 15:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:20:40.859 15:01:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3162426 00:20:49.096 Initializing NVMe Controllers 00:20:49.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:49.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:49.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:49.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:49.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 
00:20:49.096 Initialization complete. Launching workers. 00:20:49.096 ======================================================== 00:20:49.097 Latency(us) 00:20:49.097 Device Information : IOPS MiB/s Average min max 00:20:49.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5522.10 21.57 11602.44 1497.46 57762.62 00:20:49.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5689.30 22.22 11249.18 1450.98 54436.39 00:20:49.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5717.20 22.33 11196.79 1227.77 55654.30 00:20:49.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5878.90 22.96 10920.94 1522.68 56544.93 00:20:49.097 ======================================================== 00:20:49.097 Total : 22807.50 89.09 11236.97 1227.77 57762.62 00:20:49.097 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:49.097 rmmod nvme_tcp 00:20:49.097 rmmod nvme_fabrics 00:20:49.097 rmmod nvme_keyring 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3162310 ']' 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3162310 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3162310 ']' 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3162310 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3162310 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3162310' 00:20:49.097 killing process with pid 3162310 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3162310 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3162310 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.097 15:01:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.004 15:01:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:51.004 15:01:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:51.004 00:20:51.004 real 0m50.302s 00:20:51.004 user 2m47.100s 00:20:51.004 sys 0m10.085s 00:20:51.004 15:01:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.004 15:01:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:51.004 ************************************ 00:20:51.004 END TEST nvmf_perf_adq 00:20:51.004 ************************************ 00:20:51.004 15:01:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:51.004 15:01:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:51.004 15:01:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.004 15:01:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:51.004 ************************************ 00:20:51.004 START TEST nvmf_shutdown 00:20:51.004 ************************************ 00:20:51.004 15:01:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:51.004 * Looking for test storage... 
00:20:51.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:20:51.004 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:51.004 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:20:51.004 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:51.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.264 --rc genhtml_branch_coverage=1 00:20:51.264 --rc genhtml_function_coverage=1 00:20:51.264 --rc genhtml_legend=1 00:20:51.264 --rc geninfo_all_blocks=1 00:20:51.264 --rc geninfo_unexecuted_blocks=1 00:20:51.264 00:20:51.264 ' 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:51.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.264 --rc genhtml_branch_coverage=1 00:20:51.264 --rc genhtml_function_coverage=1 00:20:51.264 --rc genhtml_legend=1 00:20:51.264 --rc geninfo_all_blocks=1 00:20:51.264 --rc geninfo_unexecuted_blocks=1 00:20:51.264 00:20:51.264 ' 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:51.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.264 --rc genhtml_branch_coverage=1 00:20:51.264 --rc genhtml_function_coverage=1 00:20:51.264 --rc genhtml_legend=1 00:20:51.264 --rc geninfo_all_blocks=1 00:20:51.264 --rc geninfo_unexecuted_blocks=1 00:20:51.264 00:20:51.264 ' 00:20:51.264 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:51.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.264 --rc genhtml_branch_coverage=1 00:20:51.264 --rc genhtml_function_coverage=1 00:20:51.264 --rc genhtml_legend=1 00:20:51.265 --rc geninfo_all_blocks=1 00:20:51.265 --rc geninfo_unexecuted_blocks=1 00:20:51.265 00:20:51.265 ' 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:51.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:51.265 15:01:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:51.265 ************************************ 00:20:51.265 START TEST nvmf_shutdown_tc1 00:20:51.265 ************************************ 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:51.265 15:01:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:57.847 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:57.847 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:57.847 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:57.847 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:57.847 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:57.847 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:57.847 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:57.847 15:01:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:57.848 15:01:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:57.848 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:57.848 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:57.848 Found net devices under 0000:86:00.0: cvl_0_0 00:20:57.848 15:01:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:57.848 Found net devices under 0000:86:00.1: cvl_0_1 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:57.848 15:01:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:57.848 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:57.848 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:57.848 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:57.848 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:57.848 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:57.848 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:57.848 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:57.848 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:57.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:57.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:20:57.848 00:20:57.848 --- 10.0.0.2 ping statistics --- 00:20:57.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.848 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:20:57.848 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:57.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:57.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:20:57.848 00:20:57.848 --- 10.0.0.1 ping statistics --- 00:20:57.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.848 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3167660 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3167660 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3167660 ']' 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:57.849 [2024-12-11 15:01:50.317673] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:20:57.849 [2024-12-11 15:01:50.317719] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.849 [2024-12-11 15:01:50.396076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:57.849 [2024-12-11 15:01:50.435320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.849 [2024-12-11 15:01:50.435361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.849 [2024-12-11 15:01:50.435369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.849 [2024-12-11 15:01:50.435378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.849 [2024-12-11 15:01:50.435383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:57.849 [2024-12-11 15:01:50.437020] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:57.849 [2024-12-11 15:01:50.437107] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:20:57.849 [2024-12-11 15:01:50.437197] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.849 [2024-12-11 15:01:50.437198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:57.849 [2024-12-11 15:01:50.586818] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:57.849 15:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.849 15:01:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:57.849 Malloc1 
00:20:57.849 [2024-12-11 15:01:50.701624] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.849 Malloc2 00:20:57.849 Malloc3 00:20:57.849 Malloc4 00:20:57.849 Malloc5 00:20:57.849 Malloc6 00:20:58.106 Malloc7 00:20:58.106 Malloc8 00:20:58.106 Malloc9 00:20:58.106 Malloc10 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3167924 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3167924 /var/tmp/bdevperf.sock 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3167924 ']' 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:58.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.106 { 00:20:58.106 "params": { 00:20:58.106 "name": "Nvme$subsystem", 00:20:58.106 "trtype": "$TEST_TRANSPORT", 00:20:58.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.106 "adrfam": "ipv4", 00:20:58.106 "trsvcid": "$NVMF_PORT", 00:20:58.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.106 "hdgst": ${hdgst:-false}, 00:20:58.106 "ddgst": ${ddgst:-false} 00:20:58.106 }, 00:20:58.106 "method": "bdev_nvme_attach_controller" 00:20:58.106 } 00:20:58.106 EOF 00:20:58.106 )") 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.106 { 00:20:58.106 "params": { 00:20:58.106 "name": "Nvme$subsystem", 00:20:58.106 "trtype": "$TEST_TRANSPORT", 00:20:58.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.106 "adrfam": "ipv4", 00:20:58.106 "trsvcid": "$NVMF_PORT", 00:20:58.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.106 "hdgst": ${hdgst:-false}, 00:20:58.106 "ddgst": ${ddgst:-false} 00:20:58.106 }, 00:20:58.106 "method": "bdev_nvme_attach_controller" 00:20:58.106 } 00:20:58.106 EOF 00:20:58.106 )") 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.106 { 00:20:58.106 "params": { 00:20:58.106 "name": "Nvme$subsystem", 00:20:58.106 "trtype": "$TEST_TRANSPORT", 00:20:58.106 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.106 "adrfam": "ipv4", 00:20:58.106 "trsvcid": "$NVMF_PORT", 00:20:58.106 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.106 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.106 "hdgst": ${hdgst:-false}, 00:20:58.106 "ddgst": ${ddgst:-false} 00:20:58.106 }, 00:20:58.106 "method": "bdev_nvme_attach_controller" 00:20:58.106 } 00:20:58.106 EOF 00:20:58.106 )") 00:20:58.106 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- 
# config+=("$(cat <<-EOF 00:20:58.364 { 00:20:58.364 "params": { 00:20:58.364 "name": "Nvme$subsystem", 00:20:58.364 "trtype": "$TEST_TRANSPORT", 00:20:58.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.364 "adrfam": "ipv4", 00:20:58.364 "trsvcid": "$NVMF_PORT", 00:20:58.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.364 "hdgst": ${hdgst:-false}, 00:20:58.364 "ddgst": ${ddgst:-false} 00:20:58.364 }, 00:20:58.364 "method": "bdev_nvme_attach_controller" 00:20:58.364 } 00:20:58.364 EOF 00:20:58.364 )") 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.364 { 00:20:58.364 "params": { 00:20:58.364 "name": "Nvme$subsystem", 00:20:58.364 "trtype": "$TEST_TRANSPORT", 00:20:58.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.364 "adrfam": "ipv4", 00:20:58.364 "trsvcid": "$NVMF_PORT", 00:20:58.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.364 "hdgst": ${hdgst:-false}, 00:20:58.364 "ddgst": ${ddgst:-false} 00:20:58.364 }, 00:20:58.364 "method": "bdev_nvme_attach_controller" 00:20:58.364 } 00:20:58.364 EOF 00:20:58.364 )") 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.364 { 00:20:58.364 "params": { 00:20:58.364 "name": "Nvme$subsystem", 00:20:58.364 "trtype": "$TEST_TRANSPORT", 00:20:58.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.364 "adrfam": "ipv4", 00:20:58.364 "trsvcid": "$NVMF_PORT", 00:20:58.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.364 "hdgst": ${hdgst:-false}, 00:20:58.364 "ddgst": ${ddgst:-false} 00:20:58.364 }, 00:20:58.364 "method": "bdev_nvme_attach_controller" 00:20:58.364 } 00:20:58.364 EOF 00:20:58.364 )") 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.364 { 00:20:58.364 "params": { 00:20:58.364 "name": "Nvme$subsystem", 00:20:58.364 "trtype": "$TEST_TRANSPORT", 00:20:58.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.364 "adrfam": "ipv4", 00:20:58.364 "trsvcid": "$NVMF_PORT", 00:20:58.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.364 "hdgst": ${hdgst:-false}, 00:20:58.364 "ddgst": ${ddgst:-false} 00:20:58.364 }, 00:20:58.364 "method": "bdev_nvme_attach_controller" 00:20:58.364 } 00:20:58.364 EOF 00:20:58.364 )") 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:58.364 [2024-12-11 15:01:51.179721] Starting SPDK 
v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:20:58.364 [2024-12-11 15:01:51.179769] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.364 { 00:20:58.364 "params": { 00:20:58.364 "name": "Nvme$subsystem", 00:20:58.364 "trtype": "$TEST_TRANSPORT", 00:20:58.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.364 "adrfam": "ipv4", 00:20:58.364 "trsvcid": "$NVMF_PORT", 00:20:58.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.364 "hdgst": ${hdgst:-false}, 00:20:58.364 "ddgst": ${ddgst:-false} 00:20:58.364 }, 00:20:58.364 "method": "bdev_nvme_attach_controller" 00:20:58.364 } 00:20:58.364 EOF 00:20:58.364 )") 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.364 { 00:20:58.364 "params": { 00:20:58.364 "name": "Nvme$subsystem", 00:20:58.364 "trtype": "$TEST_TRANSPORT", 00:20:58.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.364 "adrfam": "ipv4", 00:20:58.364 "trsvcid": "$NVMF_PORT", 00:20:58.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.364 "hdgst": ${hdgst:-false}, 00:20:58.364 "ddgst": ${ddgst:-false} 00:20:58.364 }, 00:20:58.364 "method": "bdev_nvme_attach_controller" 00:20:58.364 } 00:20:58.364 EOF 00:20:58.364 )") 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:58.364 { 00:20:58.364 "params": { 00:20:58.364 "name": "Nvme$subsystem", 00:20:58.364 "trtype": "$TEST_TRANSPORT", 00:20:58.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:58.364 "adrfam": "ipv4", 00:20:58.364 "trsvcid": "$NVMF_PORT", 00:20:58.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:58.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:58.364 "hdgst": ${hdgst:-false}, 00:20:58.364 "ddgst": ${ddgst:-false} 00:20:58.364 }, 00:20:58.364 "method": "bdev_nvme_attach_controller" 00:20:58.364 } 00:20:58.364 EOF 00:20:58.364 )") 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:58.364 15:01:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:58.365 "params": { 00:20:58.365 "name": "Nvme1", 00:20:58.365 "trtype": "tcp", 00:20:58.365 "traddr": "10.0.0.2", 00:20:58.365 "adrfam": "ipv4", 00:20:58.365 "trsvcid": "4420", 00:20:58.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.365 "hdgst": false, 00:20:58.365 "ddgst": false 00:20:58.365 }, 00:20:58.365 "method": "bdev_nvme_attach_controller" 00:20:58.365 },{ 00:20:58.365 "params": { 00:20:58.365 "name": "Nvme2", 00:20:58.365 "trtype": "tcp", 00:20:58.365 "traddr": "10.0.0.2", 00:20:58.365 "adrfam": "ipv4", 00:20:58.365 "trsvcid": "4420", 00:20:58.365 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:58.365 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:58.365 "hdgst": false, 00:20:58.365 "ddgst": false 00:20:58.365 }, 00:20:58.365 "method": "bdev_nvme_attach_controller" 00:20:58.365 },{ 00:20:58.365 "params": { 00:20:58.365 "name": "Nvme3", 00:20:58.365 "trtype": "tcp", 00:20:58.365 "traddr": "10.0.0.2", 00:20:58.365 "adrfam": "ipv4", 00:20:58.365 "trsvcid": "4420", 00:20:58.365 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:58.365 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:58.365 "hdgst": false, 00:20:58.365 "ddgst": false 00:20:58.365 }, 00:20:58.365 "method": "bdev_nvme_attach_controller" 00:20:58.365 },{ 00:20:58.365 "params": { 00:20:58.365 "name": "Nvme4", 00:20:58.365 "trtype": "tcp", 00:20:58.365 "traddr": "10.0.0.2", 00:20:58.365 "adrfam": "ipv4", 00:20:58.365 "trsvcid": "4420", 00:20:58.365 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:58.365 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:58.365 "hdgst": false, 00:20:58.365 "ddgst": false 00:20:58.365 }, 00:20:58.365 "method": "bdev_nvme_attach_controller" 00:20:58.365 },{ 00:20:58.365 "params": { 00:20:58.365 "name": "Nvme5", 00:20:58.365 "trtype": "tcp", 00:20:58.365 "traddr": "10.0.0.2", 00:20:58.365 "adrfam": "ipv4", 00:20:58.365 "trsvcid": "4420", 00:20:58.365 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:58.365 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:58.365 "hdgst": false, 00:20:58.365 "ddgst": false 00:20:58.365 }, 00:20:58.365 "method": "bdev_nvme_attach_controller" 00:20:58.365 },{ 00:20:58.365 "params": { 00:20:58.365 "name": "Nvme6", 00:20:58.365 "trtype": "tcp", 00:20:58.365 "traddr": "10.0.0.2", 00:20:58.365 "adrfam": "ipv4", 00:20:58.365 "trsvcid": "4420", 00:20:58.365 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:58.365 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:58.365 "hdgst": false, 00:20:58.365 "ddgst": false 00:20:58.365 }, 00:20:58.365 "method": "bdev_nvme_attach_controller" 00:20:58.365 },{ 00:20:58.365 "params": { 00:20:58.365 "name": "Nvme7", 00:20:58.365 "trtype": "tcp", 00:20:58.365 "traddr": "10.0.0.2", 00:20:58.365 "adrfam": "ipv4", 00:20:58.365 "trsvcid": "4420", 00:20:58.365 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:58.365 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:58.365 "hdgst": false, 00:20:58.365 "ddgst": false 00:20:58.365 }, 00:20:58.365 "method": "bdev_nvme_attach_controller" 00:20:58.365 },{ 00:20:58.365 "params": { 00:20:58.365 "name": "Nvme8", 00:20:58.365 "trtype": "tcp", 00:20:58.365 "traddr": "10.0.0.2", 00:20:58.365 "adrfam": "ipv4", 00:20:58.365 "trsvcid": "4420", 00:20:58.365 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:58.365 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:58.365 "hdgst": false, 00:20:58.365 "ddgst": false 00:20:58.365 }, 00:20:58.365 "method": "bdev_nvme_attach_controller" 00:20:58.365 },{ 00:20:58.365 "params": { 00:20:58.365 "name": "Nvme9", 00:20:58.365 "trtype": "tcp", 00:20:58.365 "traddr": "10.0.0.2", 00:20:58.365 "adrfam": "ipv4", 00:20:58.365 "trsvcid": "4420", 00:20:58.365 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:58.365 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:58.365 "hdgst": false, 00:20:58.365 "ddgst": false 00:20:58.365 }, 00:20:58.365 "method": "bdev_nvme_attach_controller" 00:20:58.365 },{ 00:20:58.365 "params": { 00:20:58.365 "name": "Nvme10", 00:20:58.365 "trtype": "tcp", 00:20:58.365 "traddr": "10.0.0.2", 00:20:58.365 "adrfam": "ipv4", 00:20:58.365 "trsvcid": "4420", 00:20:58.365 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:58.365 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:58.365 "hdgst": false, 00:20:58.365 "ddgst": false 00:20:58.365 }, 00:20:58.365 "method": "bdev_nvme_attach_controller" 00:20:58.365 }' 00:20:58.365 [2024-12-11 15:01:51.256467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.365 [2024-12-11 15:01:51.297323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.259 15:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:00.259 15:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:00.259 15:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:00.259 15:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.259 15:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:00.259 15:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.259 15:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3167924 00:21:00.259 15:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:00.259 15:01:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:01.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/shutdown.sh: line 74: 3167924 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3167660 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.192 { 00:21:01.192 "params": { 00:21:01.192 "name": "Nvme$subsystem", 00:21:01.192 "trtype": "$TEST_TRANSPORT", 00:21:01.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.192 "adrfam": "ipv4", 00:21:01.192 "trsvcid": "$NVMF_PORT", 00:21:01.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.192 "hdgst": ${hdgst:-false}, 00:21:01.192 "ddgst": ${ddgst:-false} 00:21:01.192 }, 00:21:01.192 "method": "bdev_nvme_attach_controller" 00:21:01.192 } 00:21:01.192 EOF 00:21:01.192 )") 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.192 { 00:21:01.192 "params": { 00:21:01.192 "name": "Nvme$subsystem", 00:21:01.192 "trtype": "$TEST_TRANSPORT", 00:21:01.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.192 "adrfam": "ipv4", 00:21:01.192 "trsvcid": "$NVMF_PORT", 00:21:01.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.192 "hdgst": ${hdgst:-false}, 00:21:01.192 "ddgst": ${ddgst:-false} 00:21:01.192 }, 00:21:01.192 "method": "bdev_nvme_attach_controller" 00:21:01.192 } 00:21:01.192 EOF 00:21:01.192 )") 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.192 { 00:21:01.192 "params": { 00:21:01.192 "name": "Nvme$subsystem", 00:21:01.192 "trtype": "$TEST_TRANSPORT", 00:21:01.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.192 "adrfam": "ipv4", 00:21:01.192 "trsvcid": "$NVMF_PORT", 00:21:01.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.192 "hdgst": ${hdgst:-false}, 00:21:01.192 "ddgst": ${ddgst:-false} 00:21:01.192 }, 00:21:01.192 "method": "bdev_nvme_attach_controller" 00:21:01.192 } 00:21:01.192 EOF 00:21:01.192 )") 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.192 { 00:21:01.192 "params": { 00:21:01.192 "name": "Nvme$subsystem", 00:21:01.192 "trtype": "$TEST_TRANSPORT", 00:21:01.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.192 "adrfam": "ipv4", 00:21:01.192 "trsvcid": "$NVMF_PORT", 00:21:01.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.192 "hdgst": ${hdgst:-false}, 00:21:01.192 "ddgst": ${ddgst:-false} 00:21:01.192 }, 00:21:01.192 "method": "bdev_nvme_attach_controller" 00:21:01.192 } 00:21:01.192 EOF 00:21:01.192 )") 00:21:01.192 15:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.192 { 00:21:01.192 "params": { 00:21:01.192 "name": "Nvme$subsystem", 00:21:01.192 "trtype": "$TEST_TRANSPORT", 00:21:01.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.192 "adrfam": "ipv4", 00:21:01.192 "trsvcid": "$NVMF_PORT", 00:21:01.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.192 "hdgst": ${hdgst:-false}, 00:21:01.192 "ddgst": ${ddgst:-false} 00:21:01.192 }, 00:21:01.192 "method": "bdev_nvme_attach_controller" 00:21:01.192 } 00:21:01.192 EOF 00:21:01.192 )") 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.192 { 00:21:01.192 "params": { 00:21:01.192 "name": "Nvme$subsystem", 00:21:01.192 "trtype": "$TEST_TRANSPORT", 00:21:01.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.192 "adrfam": "ipv4", 00:21:01.192 "trsvcid": "$NVMF_PORT", 00:21:01.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.192 "hdgst": ${hdgst:-false}, 00:21:01.192 "ddgst": ${ddgst:-false} 00:21:01.192 }, 00:21:01.192 "method": "bdev_nvme_attach_controller" 00:21:01.192 } 00:21:01.192 EOF 00:21:01.192 )") 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.192 { 00:21:01.192 "params": { 00:21:01.192 "name": "Nvme$subsystem", 00:21:01.192 "trtype": "$TEST_TRANSPORT", 00:21:01.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.192 "adrfam": "ipv4", 00:21:01.192 "trsvcid": "$NVMF_PORT", 00:21:01.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.192 "hdgst": ${hdgst:-false}, 00:21:01.192 "ddgst": ${ddgst:-false} 00:21:01.192 }, 00:21:01.192 "method": "bdev_nvme_attach_controller" 00:21:01.192 } 00:21:01.192 EOF 00:21:01.192 )") 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.192 [2024-12-11 15:01:54.107208] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:21:01.192 [2024-12-11 15:01:54.107259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3168416 ] 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.192 { 00:21:01.192 "params": { 00:21:01.192 "name": "Nvme$subsystem", 00:21:01.192 "trtype": "$TEST_TRANSPORT", 00:21:01.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.192 "adrfam": "ipv4", 00:21:01.192 "trsvcid": "$NVMF_PORT", 00:21:01.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.192 "hdgst": ${hdgst:-false}, 00:21:01.192 "ddgst": ${ddgst:-false} 00:21:01.192 }, 00:21:01.192 "method": "bdev_nvme_attach_controller" 00:21:01.192 } 00:21:01.192 EOF 00:21:01.192 )") 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.192 { 00:21:01.192 "params": { 00:21:01.192 "name": "Nvme$subsystem", 00:21:01.192 "trtype": "$TEST_TRANSPORT", 00:21:01.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.192 "adrfam": "ipv4", 00:21:01.192 "trsvcid": "$NVMF_PORT", 00:21:01.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.192 "hdgst": ${hdgst:-false}, 00:21:01.192 "ddgst": ${ddgst:-false} 00:21:01.192 }, 00:21:01.192 "method": "bdev_nvme_attach_controller" 00:21:01.192 } 00:21:01.192 EOF 00:21:01.192 )") 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:01.192 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:01.192 { 00:21:01.192 "params": { 00:21:01.192 "name": "Nvme$subsystem", 00:21:01.192 "trtype": "$TEST_TRANSPORT", 00:21:01.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:01.192 "adrfam": "ipv4", 00:21:01.192 "trsvcid": "$NVMF_PORT", 00:21:01.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:01.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:01.192 "hdgst": ${hdgst:-false}, 00:21:01.193 "ddgst": ${ddgst:-false} 00:21:01.193 }, 00:21:01.193 "method": "bdev_nvme_attach_controller" 00:21:01.193 } 00:21:01.193 EOF 00:21:01.193 )") 00:21:01.193 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:01.193 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:01.193 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:01.193 15:01:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:01.193 "params": { 00:21:01.193 "name": "Nvme1", 00:21:01.193 "trtype": "tcp", 00:21:01.193 "traddr": "10.0.0.2", 00:21:01.193 "adrfam": "ipv4", 00:21:01.193 "trsvcid": "4420", 00:21:01.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:01.193 "hdgst": false, 00:21:01.193 "ddgst": false 00:21:01.193 }, 00:21:01.193 "method": "bdev_nvme_attach_controller" 00:21:01.193 },{ 00:21:01.193 "params": { 00:21:01.193 "name": "Nvme2", 00:21:01.193 "trtype": "tcp", 00:21:01.193 "traddr": "10.0.0.2", 00:21:01.193 "adrfam": "ipv4", 00:21:01.193 "trsvcid": "4420", 00:21:01.193 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:01.193 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:01.193 "hdgst": false, 00:21:01.193 "ddgst": false 00:21:01.193 }, 00:21:01.193 "method": "bdev_nvme_attach_controller" 00:21:01.193 },{ 00:21:01.193 "params": { 00:21:01.193 "name": "Nvme3", 00:21:01.193 "trtype": "tcp", 00:21:01.193 "traddr": "10.0.0.2", 00:21:01.193 "adrfam": "ipv4", 00:21:01.193 "trsvcid": "4420", 00:21:01.193 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:01.193 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:01.193 "hdgst": false, 00:21:01.193 "ddgst": false 00:21:01.193 }, 00:21:01.193 "method": "bdev_nvme_attach_controller" 00:21:01.193 },{ 00:21:01.193 "params": { 00:21:01.193 "name": "Nvme4", 00:21:01.193 "trtype": "tcp", 00:21:01.193 "traddr": "10.0.0.2", 00:21:01.193 "adrfam": "ipv4", 00:21:01.193 "trsvcid": "4420", 00:21:01.193 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:01.193 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:01.193 "hdgst": false, 00:21:01.193 "ddgst": false 00:21:01.193 }, 00:21:01.193 "method": "bdev_nvme_attach_controller" 00:21:01.193 },{ 00:21:01.193 "params": { 00:21:01.193 "name": "Nvme5", 00:21:01.193 "trtype": "tcp", 00:21:01.193 "traddr": "10.0.0.2", 00:21:01.193 "adrfam": "ipv4", 00:21:01.193 "trsvcid": "4420", 00:21:01.193 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:01.193 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:01.193 "hdgst": false, 00:21:01.193 "ddgst": false 00:21:01.193 }, 00:21:01.193 "method": "bdev_nvme_attach_controller" 00:21:01.193 },{ 00:21:01.193 "params": { 00:21:01.193 "name": "Nvme6", 00:21:01.193 "trtype": "tcp", 00:21:01.193 "traddr": "10.0.0.2", 00:21:01.193 "adrfam": "ipv4", 00:21:01.193 "trsvcid": "4420", 00:21:01.193 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:01.193 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:01.193 "hdgst": false, 00:21:01.193 "ddgst": false 00:21:01.193 }, 00:21:01.193 "method": "bdev_nvme_attach_controller" 00:21:01.193 },{ 00:21:01.193 "params": { 00:21:01.193 "name": "Nvme7", 00:21:01.193 "trtype": "tcp", 00:21:01.193 "traddr": "10.0.0.2", 00:21:01.193 "adrfam": "ipv4", 00:21:01.193 "trsvcid": "4420", 00:21:01.193 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:01.193 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:01.193 "hdgst": false, 00:21:01.193 "ddgst": false 00:21:01.193 }, 00:21:01.193 "method": "bdev_nvme_attach_controller" 00:21:01.193 },{ 00:21:01.193 "params": { 00:21:01.193 "name": "Nvme8", 00:21:01.193 "trtype": "tcp", 00:21:01.193 "traddr": "10.0.0.2", 00:21:01.193 "adrfam": "ipv4", 00:21:01.193 "trsvcid": "4420", 00:21:01.193 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:01.193 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:01.193 "hdgst": false, 00:21:01.193 "ddgst": false 00:21:01.193 }, 00:21:01.193 "method": "bdev_nvme_attach_controller" 00:21:01.193 },{ 00:21:01.193 "params": { 00:21:01.193 "name": "Nvme9", 00:21:01.193 "trtype": "tcp", 00:21:01.193 "traddr": "10.0.0.2", 00:21:01.193 "adrfam": "ipv4", 00:21:01.193 "trsvcid": "4420", 00:21:01.193 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:01.193 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:01.193 "hdgst": false, 00:21:01.193 "ddgst": false 00:21:01.193 }, 00:21:01.193 "method": "bdev_nvme_attach_controller" 00:21:01.193 },{ 00:21:01.193 "params": { 00:21:01.193 "name": "Nvme10", 00:21:01.193 "trtype": "tcp", 00:21:01.193 "traddr": "10.0.0.2", 00:21:01.193 "adrfam": "ipv4", 00:21:01.193 "trsvcid": "4420", 00:21:01.193 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:01.193 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:01.193 "hdgst": false, 00:21:01.193 "ddgst": false 00:21:01.193 }, 00:21:01.193 "method": "bdev_nvme_attach_controller" 00:21:01.193 }' 00:21:01.193 [2024-12-11 15:01:54.186468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.193 [2024-12-11 15:01:54.227062] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.575 Running I/O for 1 seconds... 00:21:03.765 2196.00 IOPS, 137.25 MiB/s 00:21:03.765 Latency(us) 00:21:03.765 [2024-12-11T14:01:56.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.765 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.765 Verification LBA range: start 0x0 length 0x400 00:21:03.765 Nvme1n1 : 1.09 241.16 15.07 0.00 0.00 260735.51 6097.70 208803.39 00:21:03.765 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.765 Verification LBA range: start 0x0 length 0x400 00:21:03.765 Nvme2n1 : 1.08 236.41 14.78 0.00 0.00 263941.79 16754.42 223392.28 00:21:03.765 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.765 Verification LBA range: start 0x0 length 0x400 00:21:03.765 Nvme3n1 : 1.15 278.86 17.43 0.00 0.00 221065.57 16070.57 223392.28 00:21:03.765 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.765 Verification LBA range: start 0x0 length 0x400 00:21:03.765 Nvme4n1 : 1.08 299.78 18.74 0.00 0.00 201293.19 9346.00 216097.84 00:21:03.765 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.765 Verification LBA range: start 0x0 length 0x400 00:21:03.765 Nvme5n1 : 1.15 282.38 17.65 0.00 0.00 210342.94 9687.93 221568.67 00:21:03.765 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.765 Verification LBA range: start 0x0 length 0x400 00:21:03.765 Nvme6n1 : 1.16 277.06 17.32 0.00 0.00 213045.60 20059.71 217921.45 00:21:03.765 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.765 Verification LBA range: start 0x0 length 0x400 00:21:03.765 Nvme7n1 : 1.14 289.09 18.07 0.00 0.00 199589.07 3875.17 217009.64 00:21:03.765 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.765 Verification LBA range: start 0x0 length 0x400 00:21:03.765 Nvme8n1 : 1.15 281.23 17.58 0.00 0.00 203135.30 2621.44 229774.91 00:21:03.765 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:03.765 Verification LBA range: start 0x0 length 0x400 00:21:03.765 Nvme9n1 : 1.20 266.30 16.64 0.00 0.00 205480.92 14075.99 226127.69 00:21:03.765 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:21:03.765 Verification LBA range: start 0x0 length 0x400 00:21:03.765 Nvme10n1 : 1.16 275.36 17.21 0.00 0.00 201792.82 17666.23 240716.58 00:21:03.765 [2024-12-11T14:01:56.813Z] =================================================================================================================== 00:21:03.765 [2024-12-11T14:01:56.813Z] Total : 2727.62 170.48 0.00 0.00 216178.90 2621.44 240716.58 00:21:03.765 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:03.765 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:03.765 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:21:03.765 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:21:03.765 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:03.765 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:03.765 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:03.765 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:03.765 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:03.765 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:03.765 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:03.765 rmmod nvme_tcp 00:21:03.765 rmmod nvme_fabrics 00:21:04.023 rmmod nvme_keyring 00:21:04.023 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:04.023 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:04.023 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:04.023 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3167660 ']' 00:21:04.023 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3167660 00:21:04.023 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3167660 ']' 00:21:04.023 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3167660 00:21:04.024 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:04.024 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.024 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3167660 00:21:04.024 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:04.024 15:01:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:04.024 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3167660' 00:21:04.024 killing process with pid 3167660 00:21:04.024 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3167660 00:21:04.024 15:01:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3167660 00:21:04.283 15:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:04.283 15:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:04.283 15:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:04.283 15:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:04.283 15:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:04.283 15:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:04.283 15:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:04.283 15:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:04.283 15:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:04.283 15:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.283 15:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.283 15:01:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:06.821 00:21:06.821 real 0m15.155s 00:21:06.821 user 0m32.940s 00:21:06.821 sys 0m5.824s 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:06.821 ************************************ 00:21:06.821 END TEST nvmf_shutdown_tc1 00:21:06.821 ************************************ 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:06.821 ************************************ 00:21:06.821 START TEST nvmf_shutdown_tc2 00:21:06.821 ************************************ 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # 
nvmf_shutdown_tc2 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:06.821 15:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:06.821 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.821 15:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:06.821 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:06.821 Found net devices under 0000:86:00.0: cvl_0_0 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.821 15:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:06.821 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:06.822 Found net devices under 0000:86:00.1: cvl_0_1 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:06.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:06.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:21:06.822 00:21:06.822 --- 10.0.0.2 ping statistics --- 00:21:06.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.822 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:06.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:06.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:21:06.822 00:21:06.822 --- 10.0.0.1 ping statistics --- 00:21:06.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:06.822 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3169441 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3169441 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3169441 ']' 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:06.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:06.822 15:01:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:06.822 [2024-12-11 15:01:59.768357] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:21:06.822 [2024-12-11 15:01:59.768401] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.822 [2024-12-11 15:01:59.849991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:07.081 [2024-12-11 15:01:59.892276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.081 [2024-12-11 15:01:59.892309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.081 [2024-12-11 15:01:59.892316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.081 [2024-12-11 15:01:59.892322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.081 [2024-12-11 15:01:59.892328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
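Note: the nvmf_tcp_init records above split the two E810 ports into an initiator side and a target side by moving cvl_0_0 into a private network namespace (cvl_0_0_ns_spdk) while cvl_0_1 stays in the root namespace, so TCP traffic between 10.0.0.1 and 10.0.0.2 actually crosses the physical link. A condensed sketch of that sequence, using the interface names and addresses from this run:

# flush any stale addresses, then isolate the target-side port in its own namespace
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# the initiator keeps 10.0.0.1 in the root namespace, the target gets 10.0.0.2 inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port and tag the rule so teardown can find it later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
# verify connectivity in both directions before nvmf_tgt is launched inside the namespace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The comment string in the actual log embeds the full rule text after "SPDK_NVMF:"; the short tag above is enough for the grep -v SPDK_NVMF filter used during cleanup.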
00:21:07.081 [2024-12-11 15:01:59.893934] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.081 [2024-12-11 15:01:59.894041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.081 [2024-12-11 15:01:59.894145] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.081 [2024-12-11 15:01:59.894146] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:21:07.646 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:07.646 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:07.646 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:07.646 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:07.646 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:07.646 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.646 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:07.646 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.647 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:07.647 [2024-12-11 15:02:00.658439] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.647 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.647 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:07.647 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:07.647 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:07.647 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:07.647 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:21:07.647 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.647 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:07.647 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.647 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:07.647 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.647 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:07.647 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:21:07.647 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:07.647 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.647 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:07.905 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.905 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:07.905 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.905 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:07.905 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.905 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:07.905 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.905 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:07.905 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:07.905 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:07.905 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:07.905 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.905 15:02:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:07.905 Malloc1 00:21:07.905 [2024-12-11 15:02:00.775447] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.905 Malloc2 00:21:07.905 Malloc3 00:21:07.905 Malloc4 00:21:07.905 Malloc5 00:21:08.163 Malloc6 00:21:08.163 Malloc7 00:21:08.163 Malloc8 00:21:08.163 Malloc9 00:21:08.163 Malloc10 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3169721 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3169721 /var/tmp/bdevperf.sock 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3169721 ']' 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:08.163 15:02:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:08.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:08.163 { 00:21:08.163 "params": { 00:21:08.163 "name": "Nvme$subsystem", 00:21:08.163 "trtype": "$TEST_TRANSPORT", 00:21:08.163 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.163 "adrfam": "ipv4", 00:21:08.163 "trsvcid": "$NVMF_PORT", 00:21:08.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.163 "hdgst": ${hdgst:-false}, 00:21:08.163 "ddgst": ${ddgst:-false} 00:21:08.163 }, 00:21:08.163 "method": "bdev_nvme_attach_controller" 00:21:08.163 } 00:21:08.163 EOF 00:21:08.163 )") 00:21:08.163 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:08.421 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:08.421 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:08.421 { 00:21:08.421 "params": { 00:21:08.421 "name": "Nvme$subsystem", 00:21:08.421 "trtype": "$TEST_TRANSPORT", 00:21:08.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.421 "adrfam": "ipv4", 00:21:08.421 "trsvcid": "$NVMF_PORT", 00:21:08.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.421 "hdgst": ${hdgst:-false}, 00:21:08.421 "ddgst": ${ddgst:-false} 00:21:08.421 }, 00:21:08.421 "method": "bdev_nvme_attach_controller" 00:21:08.421 } 00:21:08.421 EOF 00:21:08.421 )") 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:08.422 { 00:21:08.422 "params": { 00:21:08.422 
"name": "Nvme$subsystem", 00:21:08.422 "trtype": "$TEST_TRANSPORT", 00:21:08.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.422 "adrfam": "ipv4", 00:21:08.422 "trsvcid": "$NVMF_PORT", 00:21:08.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.422 "hdgst": ${hdgst:-false}, 00:21:08.422 "ddgst": ${ddgst:-false} 00:21:08.422 }, 00:21:08.422 "method": "bdev_nvme_attach_controller" 00:21:08.422 } 00:21:08.422 EOF 00:21:08.422 )") 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:08.422 { 00:21:08.422 "params": { 00:21:08.422 "name": "Nvme$subsystem", 00:21:08.422 "trtype": "$TEST_TRANSPORT", 00:21:08.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.422 "adrfam": "ipv4", 00:21:08.422 "trsvcid": "$NVMF_PORT", 00:21:08.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.422 "hdgst": ${hdgst:-false}, 00:21:08.422 "ddgst": ${ddgst:-false} 00:21:08.422 }, 00:21:08.422 "method": "bdev_nvme_attach_controller" 00:21:08.422 } 00:21:08.422 EOF 00:21:08.422 )") 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:08.422 { 00:21:08.422 "params": { 00:21:08.422 "name": "Nvme$subsystem", 00:21:08.422 "trtype": "$TEST_TRANSPORT", 00:21:08.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.422 "adrfam": "ipv4", 00:21:08.422 "trsvcid": "$NVMF_PORT", 00:21:08.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.422 "hdgst": ${hdgst:-false}, 00:21:08.422 "ddgst": ${ddgst:-false} 00:21:08.422 }, 00:21:08.422 "method": "bdev_nvme_attach_controller" 00:21:08.422 } 00:21:08.422 EOF 00:21:08.422 )") 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:08.422 { 00:21:08.422 "params": { 00:21:08.422 "name": "Nvme$subsystem", 00:21:08.422 "trtype": "$TEST_TRANSPORT", 00:21:08.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.422 "adrfam": "ipv4", 00:21:08.422 "trsvcid": "$NVMF_PORT", 00:21:08.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.422 "hdgst": ${hdgst:-false}, 00:21:08.422 "ddgst": ${ddgst:-false} 00:21:08.422 }, 00:21:08.422 "method": "bdev_nvme_attach_controller" 00:21:08.422 } 00:21:08.422 EOF 00:21:08.422 )") 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:08.422 { 00:21:08.422 "params": { 00:21:08.422 "name": "Nvme$subsystem", 00:21:08.422 "trtype": "$TEST_TRANSPORT", 00:21:08.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.422 "adrfam": "ipv4", 00:21:08.422 "trsvcid": "$NVMF_PORT", 00:21:08.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.422 "hdgst": ${hdgst:-false}, 00:21:08.422 "ddgst": ${ddgst:-false} 00:21:08.422 }, 00:21:08.422 "method": "bdev_nvme_attach_controller" 00:21:08.422 } 00:21:08.422 EOF 00:21:08.422 )") 00:21:08.422 [2024-12-11 15:02:01.248569] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:21:08.422 [2024-12-11 15:02:01.248618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3169721 ] 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:08.422 { 00:21:08.422 "params": { 00:21:08.422 "name": "Nvme$subsystem", 00:21:08.422 "trtype": "$TEST_TRANSPORT", 00:21:08.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.422 "adrfam": "ipv4", 00:21:08.422 "trsvcid": "$NVMF_PORT", 00:21:08.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.422 "hdgst": ${hdgst:-false}, 00:21:08.422 "ddgst": ${ddgst:-false} 00:21:08.422 }, 00:21:08.422 "method": "bdev_nvme_attach_controller" 00:21:08.422 } 00:21:08.422 EOF 00:21:08.422 )") 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:08.422 { 00:21:08.422 "params": { 00:21:08.422 "name": "Nvme$subsystem", 00:21:08.422 "trtype": "$TEST_TRANSPORT", 00:21:08.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.422 "adrfam": "ipv4", 00:21:08.422 "trsvcid": "$NVMF_PORT", 00:21:08.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.422 "hdgst": ${hdgst:-false}, 00:21:08.422 "ddgst": ${ddgst:-false} 00:21:08.422 }, 00:21:08.422 "method": "bdev_nvme_attach_controller" 00:21:08.422 } 00:21:08.422 EOF 00:21:08.422 )") 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:08.422 { 00:21:08.422 "params": { 00:21:08.422 "name": "Nvme$subsystem", 00:21:08.422 "trtype": "$TEST_TRANSPORT", 00:21:08.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:08.422 
"adrfam": "ipv4", 00:21:08.422 "trsvcid": "$NVMF_PORT", 00:21:08.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:08.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:08.422 "hdgst": ${hdgst:-false}, 00:21:08.422 "ddgst": ${ddgst:-false} 00:21:08.422 }, 00:21:08.422 "method": "bdev_nvme_attach_controller" 00:21:08.422 } 00:21:08.422 EOF 00:21:08.422 )") 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:08.422 15:02:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:08.422 "params": { 00:21:08.422 "name": "Nvme1", 00:21:08.422 "trtype": "tcp", 00:21:08.422 "traddr": "10.0.0.2", 00:21:08.422 "adrfam": "ipv4", 00:21:08.422 "trsvcid": "4420", 00:21:08.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:08.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:08.422 "hdgst": false, 00:21:08.422 "ddgst": false 00:21:08.422 }, 00:21:08.422 "method": "bdev_nvme_attach_controller" 00:21:08.422 },{ 00:21:08.422 "params": { 00:21:08.422 "name": "Nvme2", 00:21:08.422 "trtype": "tcp", 00:21:08.422 "traddr": "10.0.0.2", 00:21:08.422 "adrfam": "ipv4", 00:21:08.422 "trsvcid": "4420", 00:21:08.422 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:08.422 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:08.422 "hdgst": false, 00:21:08.422 "ddgst": false 00:21:08.422 }, 00:21:08.422 "method": "bdev_nvme_attach_controller" 00:21:08.422 },{ 00:21:08.422 "params": { 00:21:08.422 "name": "Nvme3", 00:21:08.422 "trtype": "tcp", 00:21:08.422 "traddr": "10.0.0.2", 00:21:08.422 "adrfam": "ipv4", 00:21:08.422 "trsvcid": "4420", 00:21:08.422 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:08.422 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:08.422 "hdgst": false, 00:21:08.422 "ddgst": false 00:21:08.422 }, 00:21:08.422 "method": "bdev_nvme_attach_controller" 00:21:08.422 },{ 00:21:08.422 "params": { 00:21:08.422 "name": "Nvme4", 00:21:08.422 "trtype": "tcp", 00:21:08.422 "traddr": "10.0.0.2", 00:21:08.422 "adrfam": "ipv4", 00:21:08.422 "trsvcid": "4420", 00:21:08.422 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:08.422 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:08.422 "hdgst": false, 00:21:08.422 "ddgst": false 00:21:08.422 }, 00:21:08.422 "method": "bdev_nvme_attach_controller" 00:21:08.422 },{ 00:21:08.422 "params": { 00:21:08.423 "name": "Nvme5", 00:21:08.423 "trtype": "tcp", 00:21:08.423 "traddr": "10.0.0.2", 00:21:08.423 "adrfam": "ipv4", 00:21:08.423 "trsvcid": "4420", 00:21:08.423 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:08.423 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:08.423 "hdgst": false, 00:21:08.423 "ddgst": false 00:21:08.423 }, 00:21:08.423 "method": "bdev_nvme_attach_controller" 00:21:08.423 },{ 00:21:08.423 "params": { 00:21:08.423 "name": "Nvme6", 00:21:08.423 "trtype": "tcp", 00:21:08.423 "traddr": "10.0.0.2", 00:21:08.423 "adrfam": "ipv4", 00:21:08.423 "trsvcid": "4420", 00:21:08.423 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:08.423 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:08.423 "hdgst": false, 00:21:08.423 "ddgst": false 00:21:08.423 }, 00:21:08.423 "method": "bdev_nvme_attach_controller" 00:21:08.423 },{ 00:21:08.423 "params": { 00:21:08.423 "name": "Nvme7", 00:21:08.423 "trtype": "tcp", 00:21:08.423 "traddr": "10.0.0.2", 
00:21:08.423 "adrfam": "ipv4", 00:21:08.423 "trsvcid": "4420", 00:21:08.423 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:08.423 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:08.423 "hdgst": false, 00:21:08.423 "ddgst": false 00:21:08.423 }, 00:21:08.423 "method": "bdev_nvme_attach_controller" 00:21:08.423 },{ 00:21:08.423 "params": { 00:21:08.423 "name": "Nvme8", 00:21:08.423 "trtype": "tcp", 00:21:08.423 "traddr": "10.0.0.2", 00:21:08.423 "adrfam": "ipv4", 00:21:08.423 "trsvcid": "4420", 00:21:08.423 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:08.423 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:08.423 "hdgst": false, 00:21:08.423 "ddgst": false 00:21:08.423 }, 00:21:08.423 "method": "bdev_nvme_attach_controller" 00:21:08.423 },{ 00:21:08.423 "params": { 00:21:08.423 "name": "Nvme9", 00:21:08.423 "trtype": "tcp", 00:21:08.423 "traddr": "10.0.0.2", 00:21:08.423 "adrfam": "ipv4", 00:21:08.423 "trsvcid": "4420", 00:21:08.423 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:08.423 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:08.423 "hdgst": false, 00:21:08.423 "ddgst": false 00:21:08.423 }, 00:21:08.423 "method": "bdev_nvme_attach_controller" 00:21:08.423 },{ 00:21:08.423 "params": { 00:21:08.423 "name": "Nvme10", 00:21:08.423 "trtype": "tcp", 00:21:08.423 "traddr": "10.0.0.2", 00:21:08.423 "adrfam": "ipv4", 00:21:08.423 "trsvcid": "4420", 00:21:08.423 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:08.423 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:08.423 "hdgst": false, 00:21:08.423 "ddgst": false 00:21:08.423 }, 00:21:08.423 "method": "bdev_nvme_attach_controller" 00:21:08.423 }' 00:21:08.423 [2024-12-11 15:02:01.327259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:08.423 [2024-12-11 15:02:01.367925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.322 Running I/O for 10 seconds... 
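Note: the bdevperf run that just printed "Running I/O for 10 seconds..." was launched by target/shutdown.sh@103 with --json /dev/fd/63, which suggests the gen_nvmf_target_json output shown above is fed in through bash process substitution rather than written to a file. A sketch of the equivalent standalone invocation, assuming nvmf/common.sh has been sourced so gen_nvmf_target_json is available:

# -q 64: queue depth, -o 65536: 64 KiB I/Os, -w verify: read-back verification, -t 10: run for 10 s
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
"$SPDK_DIR"/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10

Each of the ten entries in that JSON attaches subsystem nqn.2016-06.io.spdk:cnodeN at 10.0.0.2:4420 as controller NvmeN, which is why the iostat polling that follows queries the bdev Nvme1n1.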
00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:10.322 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:10.581 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:10.581 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:10.581 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:10.581 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:10.581 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.581 15:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.581 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.581 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=79 00:21:10.581 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 79 -ge 100 ']' 00:21:10.581 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=200 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 200 -ge 100 ']' 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3169721 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3169721 ']' 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3169721 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3169721 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3169721' 00:21:10.839 killing process with pid 3169721 00:21:10.839 15:02:03 
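Note: the records above are the waitforio helper from target/shutdown.sh: it polls bdevperf's RPC socket and only lets the shutdown proceed once Nvme1n1 has completed at least 100 reads (here the counter climbs 3, 79, 200 across three polls). A condensed sketch of that loop, using spdk's rpc.py in place of the suite's rpc_cmd wrapper:

# wait until bdevperf has issued at least 100 reads to Nvme1n1, polling up to 10 times
i=10
while (( i != 0 )); do
    reads=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break
    sleep 0.25
    (( i-- ))
done

Only after this gate is passed does the test kill bdevperf (pid 3169721), check with kill -0 that the nvmf_tgt (pid 3169441) survived the abrupt disconnects, and then tear the target down.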
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3169721 00:21:10.839 15:02:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3169721 00:21:11.098 Received shutdown signal, test time was about 0.940366 seconds 00:21:11.098 00:21:11.098 Latency(us) 00:21:11.098 [2024-12-11T14:02:04.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.098 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.098 Verification LBA range: start 0x0 length 0x400 00:21:11.098 Nvme1n1 : 0.92 283.07 17.69 0.00 0.00 222980.43 3932.16 218833.25 00:21:11.098 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.098 Verification LBA range: start 0x0 length 0x400 00:21:11.098 Nvme2n1 : 0.92 282.52 17.66 0.00 0.00 219594.26 4388.06 219745.06 00:21:11.098 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.098 Verification LBA range: start 0x0 length 0x400 00:21:11.098 Nvme3n1 : 0.91 286.56 17.91 0.00 0.00 211832.74 7294.44 219745.06 00:21:11.098 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.098 Verification LBA range: start 0x0 length 0x400 00:21:11.098 Nvme4n1 : 0.91 280.67 17.54 0.00 0.00 213473.28 14702.86 216097.84 00:21:11.098 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.098 Verification LBA range: start 0x0 length 0x400 00:21:11.098 Nvme5n1 : 0.93 275.23 17.20 0.00 0.00 214133.09 18350.08 224304.08 00:21:11.098 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.098 Verification LBA range: start 0x0 length 0x400 00:21:11.098 Nvme6n1 : 0.93 274.02 17.13 0.00 0.00 211205.12 18464.06 219745.06 00:21:11.098 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.098 Verification LBA range: start 0x0 length 0x400 00:21:11.098 Nvme7n1 : 0.93 276.26 17.27 0.00 0.00 205365.43 15044.79 218833.25 00:21:11.098 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.098 Verification LBA range: start 0x0 length 0x400 00:21:11.098 Nvme8n1 : 0.94 273.25 17.08 0.00 0.00 204016.64 14588.88 226127.69 00:21:11.098 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.098 Verification LBA range: start 0x0 length 0x400 00:21:11.098 Nvme9n1 : 0.94 272.43 17.03 0.00 0.00 200592.25 15614.66 225215.89 00:21:11.098 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:11.098 Verification LBA range: start 0x0 length 0x400 00:21:11.098 Nvme10n1 : 0.90 212.33 13.27 0.00 0.00 250871.69 17210.32 246187.41 00:21:11.098 [2024-12-11T14:02:04.146Z] =================================================================================================================== 00:21:11.098 [2024-12-11T14:02:04.146Z] Total : 2716.34 169.77 0.00 0.00 214517.91 3932.16 246187.41 00:21:11.098 15:02:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3169441 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:12.472 15:02:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:12.472 rmmod nvme_tcp 00:21:12.472 rmmod nvme_fabrics 00:21:12.472 rmmod nvme_keyring 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3169441 ']' 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3169441 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3169441 ']' 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3169441 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3169441 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3169441' 00:21:12.472 killing process with pid 3169441 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3169441 00:21:12.472 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3169441 00:21:12.732 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:12.732 15:02:05 
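Note: the nvmftestfini records around this point undo the earlier setup: the kernel nvme-tcp/nvme-fabrics modules are unloaded, iptables is re-applied without the SPDK_NVMF-tagged rule, the test namespace is removed, and the leftover initiator address is flushed. A rough sketch of the equivalent manual cleanup; the namespace deletion itself runs behind xtrace_disable_per_cmd in this log, so that line is an assumption:

modprobe -v -r nvme-tcp nvme-fabrics                   # unload the kernel initiator modules
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop every rule carrying the SPDK_NVMF comment
ip netns delete cvl_0_0_ns_spdk                        # assumed: hidden by xtrace_disable_per_cmd
ip -4 addr flush cvl_0_1                               # clear the address left on the initiator port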
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:12.732 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:12.732 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:21:12.732 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:21:12.732 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:12.732 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:21:12.732 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:12.732 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:12.732 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.732 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.732 15:02:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.638 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:14.638 00:21:14.638 real 0m8.248s 00:21:14.638 user 0m25.583s 00:21:14.638 sys 0m1.414s 00:21:14.638 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:14.638 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:14.638 ************************************ 00:21:14.638 END TEST nvmf_shutdown_tc2 00:21:14.638 ************************************ 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:14.898 ************************************ 00:21:14.898 START TEST nvmf_shutdown_tc3 00:21:14.898 ************************************ 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:14.898 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:14.898 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:14.898 15:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:14.898 Found net devices under 0000:86:00.0: cvl_0_0 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.898 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:14.899 Found net devices under 0000:86:00.1: cvl_0_1 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.899 15:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:14.899 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:15.158 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:15.158 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:15.158 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:15.158 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:15.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:15.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:21:15.158 00:21:15.158 --- 10.0.0.2 ping statistics --- 00:21:15.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.158 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:21:15.158 15:02:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:15.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:15.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:21:15.158 00:21:15.158 --- 10.0.0.1 ping statistics --- 00:21:15.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.158 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3170944 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3170944 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:15.158 15:02:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3170944 ']' 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.158 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:15.158 [2024-12-11 15:02:08.106663] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:21:15.158 [2024-12-11 15:02:08.106709] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.158 [2024-12-11 15:02:08.185104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:15.416 [2024-12-11 15:02:08.225938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.416 [2024-12-11 15:02:08.225974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.416 [2024-12-11 15:02:08.225981] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.416 [2024-12-11 15:02:08.225987] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.416 [2024-12-11 15:02:08.225992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
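The trace above is nvmftestinit on the phy/e810 setup: the first port (cvl_0_0) is moved into a fresh network namespace to act as the target side, 10.0.0.1/24 and 10.0.0.2/24 are assigned across the namespace boundary, an iptables ACCEPT rule for TCP port 4420 is added with an SPDK_NVMF comment (which is what lets the tc2 teardown earlier strip SPDK rules by filtering iptables-save output), both directions are ping-checked, and nvmf_tgt is then started inside the namespace with -m 0x1E. A minimal stand-alone sketch of that namespace plumbing, using only the interface names and addresses visible in the log (the real logic lives in nvmf/common.sh and covers more configurations), would be roughly:

    # target-side namespace; cvl_0_0 becomes the target NIC, cvl_0_1 stays on the host as initiator
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # tag the rule so teardown can later do: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                  # host -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> host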
00:21:15.416 [2024-12-11 15:02:08.227624] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.417 [2024-12-11 15:02:08.227733] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:15.417 [2024-12-11 15:02:08.227839] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.417 [2024-12-11 15:02:08.227841] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:15.982 [2024-12-11 15:02:08.979720] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:15.982 15:02:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:15.982 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:15.982 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:15.982 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:21:15.982 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:15.982 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:15.982 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:15.982 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:15.982 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:15.982 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:15.982 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:15.982 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:15.982 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:15.982 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.240 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:16.240 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:16.240 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:16.240 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:16.240 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.240 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:16.240 Malloc1 00:21:16.240 [2024-12-11 15:02:09.089149] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.240 Malloc2 00:21:16.240 Malloc3 00:21:16.240 Malloc4 00:21:16.240 Malloc5 00:21:16.240 Malloc6 00:21:16.498 Malloc7 00:21:16.498 Malloc8 00:21:16.498 Malloc9 00:21:16.498 Malloc10 00:21:16.498 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.498 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:16.498 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3171249 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3171249 /var/tmp/bdevperf.sock 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3171249 ']' 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.499 15:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:16.499 { 00:21:16.499 "params": { 00:21:16.499 "name": "Nvme$subsystem", 00:21:16.499 "trtype": "$TEST_TRANSPORT", 00:21:16.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.499 "adrfam": "ipv4", 00:21:16.499 "trsvcid": "$NVMF_PORT", 00:21:16.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.499 "hdgst": ${hdgst:-false}, 00:21:16.499 "ddgst": ${ddgst:-false} 00:21:16.499 }, 00:21:16.499 "method": "bdev_nvme_attach_controller" 00:21:16.499 } 00:21:16.499 EOF 00:21:16.499 )") 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:16.499 { 00:21:16.499 "params": { 00:21:16.499 "name": "Nvme$subsystem", 00:21:16.499 "trtype": "$TEST_TRANSPORT", 00:21:16.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.499 "adrfam": "ipv4", 00:21:16.499 "trsvcid": "$NVMF_PORT", 00:21:16.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.499 "hdgst": ${hdgst:-false}, 00:21:16.499 "ddgst": ${ddgst:-false} 00:21:16.499 }, 00:21:16.499 "method": "bdev_nvme_attach_controller" 00:21:16.499 } 00:21:16.499 EOF 00:21:16.499 )") 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:16.499 { 00:21:16.499 "params": { 00:21:16.499 
"name": "Nvme$subsystem", 00:21:16.499 "trtype": "$TEST_TRANSPORT", 00:21:16.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.499 "adrfam": "ipv4", 00:21:16.499 "trsvcid": "$NVMF_PORT", 00:21:16.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.499 "hdgst": ${hdgst:-false}, 00:21:16.499 "ddgst": ${ddgst:-false} 00:21:16.499 }, 00:21:16.499 "method": "bdev_nvme_attach_controller" 00:21:16.499 } 00:21:16.499 EOF 00:21:16.499 )") 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:16.499 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:16.499 { 00:21:16.499 "params": { 00:21:16.499 "name": "Nvme$subsystem", 00:21:16.499 "trtype": "$TEST_TRANSPORT", 00:21:16.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.499 "adrfam": "ipv4", 00:21:16.499 "trsvcid": "$NVMF_PORT", 00:21:16.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.499 "hdgst": ${hdgst:-false}, 00:21:16.499 "ddgst": ${ddgst:-false} 00:21:16.499 }, 00:21:16.499 "method": "bdev_nvme_attach_controller" 00:21:16.499 } 00:21:16.499 EOF 00:21:16.499 )") 00:21:16.757 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:16.757 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:16.757 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:16.757 { 00:21:16.757 "params": { 00:21:16.757 "name": "Nvme$subsystem", 00:21:16.757 "trtype": "$TEST_TRANSPORT", 00:21:16.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.757 "adrfam": "ipv4", 00:21:16.757 "trsvcid": "$NVMF_PORT", 00:21:16.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.757 "hdgst": ${hdgst:-false}, 00:21:16.757 "ddgst": ${ddgst:-false} 00:21:16.757 }, 00:21:16.757 "method": "bdev_nvme_attach_controller" 00:21:16.757 } 00:21:16.757 EOF 00:21:16.757 )") 00:21:16.757 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:16.757 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:16.757 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:16.757 { 00:21:16.757 "params": { 00:21:16.757 "name": "Nvme$subsystem", 00:21:16.757 "trtype": "$TEST_TRANSPORT", 00:21:16.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.757 "adrfam": "ipv4", 00:21:16.757 "trsvcid": "$NVMF_PORT", 00:21:16.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.757 "hdgst": ${hdgst:-false}, 00:21:16.757 "ddgst": ${ddgst:-false} 00:21:16.757 }, 00:21:16.757 "method": "bdev_nvme_attach_controller" 00:21:16.757 } 00:21:16.757 EOF 00:21:16.757 )") 00:21:16.757 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:16.757 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:21:16.757 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:16.757 { 00:21:16.757 "params": { 00:21:16.757 "name": "Nvme$subsystem", 00:21:16.757 "trtype": "$TEST_TRANSPORT", 00:21:16.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.758 "adrfam": "ipv4", 00:21:16.758 "trsvcid": "$NVMF_PORT", 00:21:16.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.758 "hdgst": ${hdgst:-false}, 00:21:16.758 "ddgst": ${ddgst:-false} 00:21:16.758 }, 00:21:16.758 "method": "bdev_nvme_attach_controller" 00:21:16.758 } 00:21:16.758 EOF 00:21:16.758 )") 00:21:16.758 [2024-12-11 15:02:09.567056] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:21:16.758 [2024-12-11 15:02:09.567106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3171249 ] 00:21:16.758 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:16.758 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:16.758 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:16.758 { 00:21:16.758 "params": { 00:21:16.758 "name": "Nvme$subsystem", 00:21:16.758 "trtype": "$TEST_TRANSPORT", 00:21:16.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.758 "adrfam": "ipv4", 00:21:16.758 "trsvcid": "$NVMF_PORT", 00:21:16.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.758 "hdgst": ${hdgst:-false}, 00:21:16.758 "ddgst": ${ddgst:-false} 00:21:16.758 }, 00:21:16.758 "method": "bdev_nvme_attach_controller" 00:21:16.758 } 00:21:16.758 EOF 00:21:16.758 )") 00:21:16.758 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:16.758 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:16.758 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:16.758 { 00:21:16.758 "params": { 00:21:16.758 "name": "Nvme$subsystem", 00:21:16.758 "trtype": "$TEST_TRANSPORT", 00:21:16.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.758 "adrfam": "ipv4", 00:21:16.758 "trsvcid": "$NVMF_PORT", 00:21:16.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.758 "hdgst": ${hdgst:-false}, 00:21:16.758 "ddgst": ${ddgst:-false} 00:21:16.758 }, 00:21:16.758 "method": "bdev_nvme_attach_controller" 00:21:16.758 } 00:21:16.758 EOF 00:21:16.758 )") 00:21:16.758 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:16.758 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:16.758 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:16.758 { 00:21:16.758 "params": { 00:21:16.758 "name": "Nvme$subsystem", 00:21:16.758 "trtype": "$TEST_TRANSPORT", 00:21:16.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:16.758 
"adrfam": "ipv4", 00:21:16.758 "trsvcid": "$NVMF_PORT", 00:21:16.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:16.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:16.758 "hdgst": ${hdgst:-false}, 00:21:16.758 "ddgst": ${ddgst:-false} 00:21:16.758 }, 00:21:16.758 "method": "bdev_nvme_attach_controller" 00:21:16.758 } 00:21:16.758 EOF 00:21:16.758 )") 00:21:16.758 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:21:16.758 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:21:16.758 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:21:16.758 15:02:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:16.758 "params": { 00:21:16.758 "name": "Nvme1", 00:21:16.758 "trtype": "tcp", 00:21:16.758 "traddr": "10.0.0.2", 00:21:16.758 "adrfam": "ipv4", 00:21:16.758 "trsvcid": "4420", 00:21:16.758 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.758 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:16.758 "hdgst": false, 00:21:16.758 "ddgst": false 00:21:16.758 }, 00:21:16.758 "method": "bdev_nvme_attach_controller" 00:21:16.758 },{ 00:21:16.758 "params": { 00:21:16.758 "name": "Nvme2", 00:21:16.758 "trtype": "tcp", 00:21:16.758 "traddr": "10.0.0.2", 00:21:16.758 "adrfam": "ipv4", 00:21:16.758 "trsvcid": "4420", 00:21:16.758 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:16.758 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:16.758 "hdgst": false, 00:21:16.758 "ddgst": false 00:21:16.758 }, 00:21:16.758 "method": "bdev_nvme_attach_controller" 00:21:16.758 },{ 00:21:16.758 "params": { 00:21:16.758 "name": "Nvme3", 00:21:16.758 "trtype": "tcp", 00:21:16.758 "traddr": "10.0.0.2", 00:21:16.758 "adrfam": "ipv4", 00:21:16.758 "trsvcid": "4420", 00:21:16.758 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:16.758 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:16.758 "hdgst": false, 00:21:16.758 "ddgst": false 00:21:16.758 }, 00:21:16.758 "method": "bdev_nvme_attach_controller" 00:21:16.758 },{ 00:21:16.758 "params": { 00:21:16.758 "name": "Nvme4", 00:21:16.758 "trtype": "tcp", 00:21:16.758 "traddr": "10.0.0.2", 00:21:16.758 "adrfam": "ipv4", 00:21:16.758 "trsvcid": "4420", 00:21:16.758 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:16.758 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:16.758 "hdgst": false, 00:21:16.758 "ddgst": false 00:21:16.758 }, 00:21:16.758 "method": "bdev_nvme_attach_controller" 00:21:16.758 },{ 00:21:16.758 "params": { 00:21:16.758 "name": "Nvme5", 00:21:16.758 "trtype": "tcp", 00:21:16.758 "traddr": "10.0.0.2", 00:21:16.758 "adrfam": "ipv4", 00:21:16.758 "trsvcid": "4420", 00:21:16.758 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:16.758 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:16.758 "hdgst": false, 00:21:16.758 "ddgst": false 00:21:16.758 }, 00:21:16.758 "method": "bdev_nvme_attach_controller" 00:21:16.758 },{ 00:21:16.758 "params": { 00:21:16.758 "name": "Nvme6", 00:21:16.758 "trtype": "tcp", 00:21:16.758 "traddr": "10.0.0.2", 00:21:16.758 "adrfam": "ipv4", 00:21:16.758 "trsvcid": "4420", 00:21:16.758 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:16.758 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:16.758 "hdgst": false, 00:21:16.758 "ddgst": false 00:21:16.758 }, 00:21:16.758 "method": "bdev_nvme_attach_controller" 00:21:16.758 },{ 00:21:16.758 "params": { 00:21:16.758 "name": "Nvme7", 00:21:16.758 "trtype": "tcp", 00:21:16.758 "traddr": "10.0.0.2", 
00:21:16.758 "adrfam": "ipv4", 00:21:16.758 "trsvcid": "4420", 00:21:16.758 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:16.758 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:16.758 "hdgst": false, 00:21:16.758 "ddgst": false 00:21:16.758 }, 00:21:16.758 "method": "bdev_nvme_attach_controller" 00:21:16.758 },{ 00:21:16.758 "params": { 00:21:16.758 "name": "Nvme8", 00:21:16.758 "trtype": "tcp", 00:21:16.758 "traddr": "10.0.0.2", 00:21:16.758 "adrfam": "ipv4", 00:21:16.758 "trsvcid": "4420", 00:21:16.758 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:16.758 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:16.758 "hdgst": false, 00:21:16.758 "ddgst": false 00:21:16.758 }, 00:21:16.758 "method": "bdev_nvme_attach_controller" 00:21:16.758 },{ 00:21:16.758 "params": { 00:21:16.758 "name": "Nvme9", 00:21:16.758 "trtype": "tcp", 00:21:16.758 "traddr": "10.0.0.2", 00:21:16.758 "adrfam": "ipv4", 00:21:16.758 "trsvcid": "4420", 00:21:16.758 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:16.758 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:16.758 "hdgst": false, 00:21:16.758 "ddgst": false 00:21:16.758 }, 00:21:16.758 "method": "bdev_nvme_attach_controller" 00:21:16.758 },{ 00:21:16.758 "params": { 00:21:16.758 "name": "Nvme10", 00:21:16.758 "trtype": "tcp", 00:21:16.758 "traddr": "10.0.0.2", 00:21:16.758 "adrfam": "ipv4", 00:21:16.758 "trsvcid": "4420", 00:21:16.758 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:16.758 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:16.758 "hdgst": false, 00:21:16.758 "ddgst": false 00:21:16.758 }, 00:21:16.758 "method": "bdev_nvme_attach_controller" 00:21:16.758 }' 00:21:16.758 [2024-12-11 15:02:09.642974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.758 [2024-12-11 15:02:09.683678] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.135 Running I/O for 10 seconds... 
00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:21:18.702 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:18.960 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:18.961 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:18.961 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:18.961 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:18.961 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.961 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:18.961 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.961 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:18.961 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:18.961 15:02:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3170944 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3170944 ']' 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3170944 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3170944 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:19.234 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:19.234 15:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3170944' 00:21:19.235 killing process with pid 3170944 00:21:19.235 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3170944 00:21:19.235 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3170944 00:21:19.235 [2024-12-11 15:02:12.168694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168796] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168879] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set 00:21:19.235 [2024-12-11 15:02:12.168899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) 
to be set 00:21:19.235 [2024-12-11 15:02:12.168905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1d70 is same with the state(6) to be set
00:21:19.235 [2024-12-11 15:02:12.170450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a647a0 is same with the state(6) to be set
00:21:19.236 [2024-12-11 15:02:12.176002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.236 [2024-12-11 15:02:12.176036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.236 [2024-12-11 15:02:12.176048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.236 [2024-12-11 15:02:12.176056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.236 [2024-12-11 15:02:12.176065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.236 [2024-12-11 15:02:12.176074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.236 [2024-12-11 15:02:12.176082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.236 [2024-12-11 15:02:12.176089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.236 [2024-12-11 15:02:12.176097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240c550 is same with the state(6) to be set
00:21:19.236 [2024-12-11 15:02:12.176152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.236 [2024-12-11 15:02:12.176181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.236 [2024-12-11 15:02:12.176193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.236 [2024-12-11 15:02:12.176200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.236 [2024-12-11 15:02:12.176208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.236 [2024-12-11 15:02:12.176215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.236 [2024-12-11 15:02:12.176222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:19.236 [2024-12-11 15:02:12.176230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:19.236 [2024-12-11 15:02:12.176237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa3e50 is same with the state(6) to be set
00:21:19.236 [2024-12-11 15:02:12.176454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2240 is same with the state(6) to be set
00:21:19.237 [2024-12-11 15:02:12.179143] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2710 is same with the state(6) to be set
00:21:19.238 [2024-12-11 15:02:12.180495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f2c00 is same with the state(6) to be set
00:21:19.238 [2024-12-11 15:02:12.181548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f30d0 is same with the state(6) to be set
00:21:19.239 [2024-12-11 15:02:12.182713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3450 is same with the state(6) to be set
00:21:19.239 [2024-12-11 15:02:12.183650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3920 is same with the state(6) to be set
00:21:19.239 [2024-12-11 15:02:12.184179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f3e10 is same with the state(6) to be set
00:21:19.240 [2024-12-11 15:02:12.185180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set
with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185376] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.240 [2024-12-11 15:02:12.185427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.241 [2024-12-11 15:02:12.185432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.241 [2024-12-11 15:02:12.185438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.241 [2024-12-11 15:02:12.185445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.241 [2024-12-11 15:02:12.185451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.241 [2024-12-11 15:02:12.185457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.241 [2024-12-11 15:02:12.185463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.241 [2024-12-11 15:02:12.185470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.241 [2024-12-11 15:02:12.185476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.241 [2024-12-11 15:02:12.185482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.241 [2024-12-11 15:02:12.185488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.241 [2024-12-11 15:02:12.185494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.241 [2024-12-11 15:02:12.185502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the state(6) to be set 00:21:19.241 [2024-12-11 15:02:12.185508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f42e0 is same with the 
00:21:19.241 2152.00 IOPS, 134.50 MiB/s [2024-12-11T14:02:12.289Z]
[2024-12-11 15:02:12.196184-196924] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 completed with ABORTED - SQ DELETION (00/08) on each admin qpair; nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c8190, 0x2403760, 0x1fa39c0, 0x1eb8610, 0x23cf040, 0x1f97e90, 0x1f99250 and 0x1f99b60 is same with the state(6) to be set; nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240c550 and tqpair=0x1fa3e50 (9): Bad file descriptor]
[2024-12-11 15:02:12.209631-210683] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: outstanding WRITE (sqid:1 cid:61-63, lba:40576-40832) and READ (sqid:1 cid:0-60, lba:32768-40448) commands, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, all completed with ABORTED - SQ DELETION (00/08)]
00:21:19.243 [2024-12-11 15:02:12.210920] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[2024-12-11 15:02:12.210983-211099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c8190, 0x2403760, 0x1fa39c0, 0x1eb8610, 0x23cf040, 0x1f97e90, 0x1f99250 and 0x1f99b60 (9): Bad file descriptor]
00:21:19.244 [2024-12-11 15:02:12.211149] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[2024-12-11 15:02:12.211398-212020] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: outstanding WRITE commands sqid:1 cid:0-38, lba:32768-37632, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, all completed with ABORTED - SQ DELETION (00/08)]
00:21:19.244 [2024-12-11 15:02:12.212028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:19.245 [2024-12-11 15:02:12.212034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 
[2024-12-11 15:02:12.212194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 
15:02:12.212348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.212403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.212409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.213567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.213586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.213604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.213611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.213619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.213627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.213635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.213641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.213649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.213656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.213665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.213672] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.213682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.213688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.213696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.213704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.213712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.213719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.213727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.213733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.213742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.213749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.213757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.213764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.213774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.213781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.213789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.213799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.213808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.213814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.245 [2024-12-11 15:02:12.213823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.245 [2024-12-11 15:02:12.213831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.213839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.213846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.213855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.213862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.213870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.213877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.213885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.213892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.213900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.213907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.213915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.213923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.213931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.213938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.213946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.213953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.213961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.213968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.213977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.213984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.213998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.246 [2024-12-11 15:02:12.214466] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.246 [2024-12-11 15:02:12.214474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.214481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.214490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.214496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.214505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.214512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.214520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.214527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.214535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.214542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.214550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.214557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.214567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.214574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.214581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.214590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.214598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a7f10 is same with the state(6) to be set 00:21:19.247 [2024-12-11 15:02:12.216898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.216922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.216935] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.216942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.216955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.216962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.216972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.216979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.216987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.216994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.247 [2024-12-11 15:02:12.217368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.247 [2024-12-11 15:02:12.217377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.217385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.217395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.217402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.217411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.217418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.217427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.217434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.217443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.217449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.217457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.217465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.217474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.217480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.217488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.217494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.217503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.217512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.217520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.217527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.217536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.217543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.217551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.217558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.217566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.217574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.217582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:19.248 [2024-12-11 15:02:12.217589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.217597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.217604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.217613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.217620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.217628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.224710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.224727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.224736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.224745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.224753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.224762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.224771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.224780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.224788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.224799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.224808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.224817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.224825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.224834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 
15:02:12.224842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.224851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.224859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.224868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.224876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.224885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.224892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.224901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.224910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.224919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.224926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.224935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.224943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.224952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.224959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.224968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.224976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.224985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.224993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.225002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.225012] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.225021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.225028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.225038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.225045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.225056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2230f40 is same with the state(6) to be set 00:21:19.248 [2024-12-11 15:02:12.226144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:19.248 [2024-12-11 15:02:12.226175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:19.248 [2024-12-11 15:02:12.226247] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:21:19.248 [2024-12-11 15:02:12.226273] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:21:19.248 [2024-12-11 15:02:12.226352] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:19.248 [2024-12-11 15:02:12.226403] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:19.248 [2024-12-11 15:02:12.226450] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:19.248 [2024-12-11 15:02:12.226478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.226488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.248 [2024-12-11 15:02:12.226501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.248 [2024-12-11 15:02:12.226510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.226985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.226992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.227001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.227008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.227018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.227026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.227036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.227045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.227055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.227062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.227071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:19.249 [2024-12-11 15:02:12.227078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.227088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.227095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.227104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.227111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.227121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.227129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.227137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.227145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.227153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.227172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.227182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.227190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.249 [2024-12-11 15:02:12.227200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.249 [2024-12-11 15:02:12.227210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 
15:02:12.227269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 
15:02:12.227434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.227554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.227624] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:21:19.250 [2024-12-11 15:02:12.227939] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:19.250 [2024-12-11 15:02:12.227991] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:19.250 [2024-12-11 15:02:12.228018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:19.250 [2024-12-11 15:02:12.228248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.250 [2024-12-11 15:02:12.228266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb8610 with addr=10.0.0.2, port=4420 00:21:19.250 [2024-12-11 15:02:12.228276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb8610 is same with the state(6) to be set 00:21:19.250 [2024-12-11 15:02:12.228368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.250 [2024-12-11 15:02:12.228380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fa3e50 with addr=10.0.0.2, port=4420 00:21:19.250 [2024-12-11 15:02:12.228389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa3e50 is same with the state(6) to be set 00:21:19.250 [2024-12-11 15:02:12.228674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.228689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.228702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.228713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.228725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.228735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.228745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.228758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.228769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.228777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.228787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.228795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.228805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.228813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.228823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.228831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.228840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.228848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.228857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.228865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.228874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.228882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.228892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.228899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.228909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.228917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.228927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.228934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.228943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.250 [2024-12-11 15:02:12.228951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.250 [2024-12-11 15:02:12.228961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.228968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.228983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:19.251 [2024-12-11 15:02:12.228992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 
15:02:12.229176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229358] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.251 [2024-12-11 15:02:12.229711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.251 [2024-12-11 15:02:12.229720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.229729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.229737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.229746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.229754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.229763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.229771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.229782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.229791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.229803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.229810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.229819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.229827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.229836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.229845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.229854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.229861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.229871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a90e0 is same with the state(6) to be set 00:21:19.252 [2024-12-11 15:02:12.230976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.230994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231030] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.252 [2024-12-11 15:02:12.231446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.252 [2024-12-11 15:02:12.231455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:19.253 [2024-12-11 15:02:12.231738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 
15:02:12.231913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.231983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.231993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.232000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.232010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.232017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.232026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.232034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.232044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.232052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.232061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.232068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.232077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.232084] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.232094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.232102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.232109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239b6b0 is same with the state(6) to be set 00:21:19.253 [2024-12-11 15:02:12.233219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.233235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.233246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.253 [2024-12-11 15:02:12.233254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.253 [2024-12-11 15:02:12.233264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.254 [2024-12-11 15:02:12.233915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.254 [2024-12-11 15:02:12.233922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.233931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.233938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.233947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.233953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.233963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.233970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.233979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.233986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.233994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.234001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.234010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.234017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.234026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.234034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.234044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.234051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.234060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.234067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.234076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.234084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.234092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.234099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.234108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.234115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.234124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.234132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.234141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.234149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.234161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.234168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.234177] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.234185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.234194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.234201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.234210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.234216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.234225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.234233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.234241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.234251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.234260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.234267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.234275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a8880 is same with the state(6) to be set 00:21:19.255 [2024-12-11 15:02:12.236424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.255 [2024-12-11 15:02:12.236745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.255 [2024-12-11 15:02:12.236754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.236761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.236770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.236777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.236785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.236792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.236801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.236807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.236817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.236825] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.236834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.236841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.236850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.236856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.236864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.236871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.236881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.236888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.236897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.236904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.236913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.236920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.236929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.236936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.236945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.236951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.236960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.236967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.236975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.236982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.236991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.236998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.256 [2024-12-11 15:02:12.237333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.256 [2024-12-11 15:02:12.237340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.237348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.237355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.237363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.237370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.237378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.237385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.237393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.237399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.237408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.237417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.237425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.237437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.237445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.237452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.237461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.237468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.237476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x32f2c00 is same with the state(6) to be set 00:21:19.257 [2024-12-11 15:02:12.238483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.238981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.257 [2024-12-11 15:02:12.238994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.257 [2024-12-11 15:02:12.239002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:19.258 [2024-12-11 15:02:12.239312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.239442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.239449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.242890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.242903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.242915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 
15:02:12.242923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.242932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.242940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.242950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.242957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.242966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.242974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.242982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.258 [2024-12-11 15:02:12.242990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.258 [2024-12-11 15:02:12.242999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222fc10 is same with the state(6) to be set 00:21:19.258 [2024-12-11 15:02:12.244210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:19.258 [2024-12-11 15:02:12.244236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:19.258 [2024-12-11 15:02:12.244249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:19.258 [2024-12-11 15:02:12.244262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:19.258 [2024-12-11 15:02:12.244273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:19.258 [2024-12-11 15:02:12.244504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.258 [2024-12-11 15:02:12.244522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240c550 with addr=10.0.0.2, port=4420 00:21:19.258 [2024-12-11 15:02:12.244531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240c550 is same with the state(6) to be set 00:21:19.258 [2024-12-11 15:02:12.244545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb8610 (9): Bad file descriptor 00:21:19.258 [2024-12-11 15:02:12.244557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa3e50 (9): Bad file descriptor 00:21:19.258 [2024-12-11 15:02:12.244582] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:21:19.258 [2024-12-11 15:02:12.244596] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:21:19.258 [2024-12-11 15:02:12.244613] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:21:19.258 [2024-12-11 15:02:12.244625] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:21:19.258 [2024-12-11 15:02:12.244636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240c550 (9): Bad file descriptor
00:21:19.258 [2024-12-11 15:02:12.244739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:21:19.258 task offset: 40576 on job bdev=Nvme7n1 fails
00:21:19.258
00:21:19.258 Latency(us)
00:21:19.258 [2024-12-11T14:02:12.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:19.259 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.259 Job: Nvme1n1 ended in about 1.04 seconds with error
00:21:19.259 Verification LBA range: start 0x0 length 0x400
00:21:19.259 Nvme1n1 : 1.04 185.49 11.59 61.83 0.00 256372.87 18122.13 223392.28
00:21:19.259 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.259 Job: Nvme2n1 ended in about 1.05 seconds with error
00:21:19.259 Verification LBA range: start 0x0 length 0x400
00:21:19.259 Nvme2n1 : 1.05 182.79 11.42 60.93 0.00 256155.83 18350.08 226127.69
00:21:19.259 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.259 Job: Nvme3n1 ended in about 1.05 seconds with error
00:21:19.259 Verification LBA range: start 0x0 length 0x400
00:21:19.259 Nvme3n1 : 1.05 243.20 15.20 60.80 0.00 202158.26 15956.59 218833.25
00:21:19.259 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.259 Job: Nvme4n1 ended in about 1.05 seconds with error
00:21:19.259 Verification LBA range: start 0x0 length 0x400
00:21:19.259 Nvme4n1 : 1.05 242.71 15.17 60.68 0.00 199407.88 14816.83 209715.20
00:21:19.259 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.259 Job: Nvme5n1 ended in about 1.06 seconds with error
00:21:19.259 Verification LBA range: start 0x0 length 0x400
00:21:19.259 Nvme5n1 : 1.06 203.66 12.73 58.73 0.00 226796.84 8377.21 255305.46
00:21:19.259 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.259 Job: Nvme6n1 ended in about 1.04 seconds with error
00:21:19.259 Verification LBA range: start 0x0 length 0x400
00:21:19.259 Nvme6n1 : 1.04 247.03 15.44 61.76 0.00 189383.90 6867.03 225215.89
00:21:19.259 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.259 Job: Nvme7n1 ended in about 1.03 seconds with error
00:21:19.259 Verification LBA range: start 0x0 length 0x400
00:21:19.259 Nvme7n1 : 1.03 247.82 15.49 61.96 0.00 185538.11 15614.66 223392.28
00:21:19.259 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.259 Job: Nvme8n1 ended in about 1.06 seconds with error
00:21:19.259 Verification LBA range: start 0x0 length 0x400
00:21:19.259 Nvme8n1 : 1.06 181.49 11.34 60.50 0.00 234204.16 14816.83 222480.47
00:21:19.259 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.259 Job: Nvme9n1 ended in about 1.06 seconds with error
00:21:19.259 Verification LBA range: start 0x0 length 0x400
00:21:19.259 Nvme9n1 : 1.06 180.54 11.28 60.18 0.00 231640.15 19147.91 230686.72
00:21:19.259 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:19.259 Job: Nvme10n1 ended in about 1.05 seconds with error
00:21:19.259 Verification LBA range: start 0x0 length 0x400
00:21:19.259 Nvme10n1 : 1.05 183.63 11.48 61.21 0.00 223214.19 18805.98 244363.80
00:21:19.259 [2024-12-11T14:02:12.307Z] ===================================================================================================================
00:21:19.259 [2024-12-11T14:02:12.307Z] Total : 2098.36 131.15 608.57 0.00 218154.83 6867.03 255305.46
00:21:19.518 [2024-12-11 15:02:12.277329] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:19.519 [2024-12-11 15:02:12.277380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:21:19.519 [2024-12-11 15:02:12.277552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:19.519 [2024-12-11 15:02:12.277571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f99250 with addr=10.0.0.2, port=4420
00:21:19.519 [2024-12-11 15:02:12.277583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f99250 is same with the state(6) to be set
00:21:19.519 [2024-12-11 15:02:12.277746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:19.519 [2024-12-11 15:02:12.277759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f97e90 with addr=10.0.0.2, port=4420
00:21:19.519 [2024-12-11 15:02:12.277768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f97e90 is same with the state(6) to be set
00:21:19.519 [2024-12-11 15:02:12.277890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:19.519 [2024-12-11 15:02:12.277902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fa39c0 with addr=10.0.0.2, port=4420
00:21:19.519 [2024-12-11 15:02:12.277911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa39c0 is same with the state(6) to be set
00:21:19.519 [2024-12-11 15:02:12.277985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:19.519 [2024-12-11 15:02:12.277997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f99b60 with addr=10.0.0.2, port=4420
00:21:19.519 [2024-12-11 15:02:12.278005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f99b60 is same with the state(6) to be set
00:21:19.519 [2024-12-11 15:02:12.278127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:19.519 [2024-12-11 15:02:12.278139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23cf040 with addr=10.0.0.2, port=4420
00:21:19.519 [2024-12-11 15:02:12.278147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cf040 is same with the state(6) to be set
00:21:19.519 [2024-12-11 15:02:12.278163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:21:19.519 [2024-12-11 15:02:12.278171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:21:19.519 [2024-12-11
15:02:12.278179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:19.519 [2024-12-11 15:02:12.278189] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:19.519 [2024-12-11 15:02:12.278198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:19.519 [2024-12-11 15:02:12.278204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:19.519 [2024-12-11 15:02:12.278212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:19.519 [2024-12-11 15:02:12.278218] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:19.519 [2024-12-11 15:02:12.279799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.519 [2024-12-11 15:02:12.279823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2403760 with addr=10.0.0.2, port=4420 00:21:19.519 [2024-12-11 15:02:12.279834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2403760 is same with the state(6) to be set 00:21:19.519 [2024-12-11 15:02:12.279985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.519 [2024-12-11 15:02:12.279997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c8190 with addr=10.0.0.2, port=4420 00:21:19.519 [2024-12-11 15:02:12.280005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c8190 is same with the state(6) to be set 00:21:19.519 [2024-12-11 15:02:12.280018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f99250 (9): Bad file descriptor 00:21:19.519 [2024-12-11 15:02:12.280030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f97e90 (9): Bad file descriptor 00:21:19.519 [2024-12-11 15:02:12.280039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa39c0 (9): Bad file descriptor 00:21:19.519 [2024-12-11 15:02:12.280052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f99b60 (9): Bad file descriptor 00:21:19.519 [2024-12-11 15:02:12.280061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cf040 (9): Bad file descriptor 00:21:19.519 [2024-12-11 15:02:12.280070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:19.519 [2024-12-11 15:02:12.280077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:19.519 [2024-12-11 15:02:12.280085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:19.519 [2024-12-11 15:02:12.280093] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:19.519 [2024-12-11 15:02:12.280143] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:21:19.519 [2024-12-11 15:02:12.280156] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:21:19.519 [2024-12-11 15:02:12.280198] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:21:19.519 [2024-12-11 15:02:12.280210] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:21:19.519 [2024-12-11 15:02:12.280220] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:21:19.519 [2024-12-11 15:02:12.280308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2403760 (9): Bad file descriptor 00:21:19.519 [2024-12-11 15:02:12.280321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c8190 (9): Bad file descriptor 00:21:19.519 [2024-12-11 15:02:12.280330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:19.519 [2024-12-11 15:02:12.280337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:19.519 [2024-12-11 15:02:12.280345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:19.519 [2024-12-11 15:02:12.280352] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:19.519 [2024-12-11 15:02:12.280361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:19.519 [2024-12-11 15:02:12.280368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:19.519 [2024-12-11 15:02:12.280375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:19.519 [2024-12-11 15:02:12.280381] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:19.519 [2024-12-11 15:02:12.280388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:19.519 [2024-12-11 15:02:12.280395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:19.519 [2024-12-11 15:02:12.280402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:19.519 [2024-12-11 15:02:12.280409] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:19.519 [2024-12-11 15:02:12.280416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:19.519 [2024-12-11 15:02:12.280422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:19.519 [2024-12-11 15:02:12.280429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:19.519 [2024-12-11 15:02:12.280439] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:19.519 [2024-12-11 15:02:12.280446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:19.519 [2024-12-11 15:02:12.280453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:19.519 [2024-12-11 15:02:12.280460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:19.519 [2024-12-11 15:02:12.280467] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:19.519 [2024-12-11 15:02:12.280531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:19.519 [2024-12-11 15:02:12.280545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:19.519 [2024-12-11 15:02:12.280555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:19.519 [2024-12-11 15:02:12.280579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:19.519 [2024-12-11 15:02:12.280588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:19.519 [2024-12-11 15:02:12.280595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:19.519 [2024-12-11 15:02:12.280601] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:19.519 [2024-12-11 15:02:12.280609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:19.519 [2024-12-11 15:02:12.280615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:19.519 [2024-12-11 15:02:12.280623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:19.519 [2024-12-11 15:02:12.280629] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:21:19.519 [2024-12-11 15:02:12.280859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.519 [2024-12-11 15:02:12.280874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fa3e50 with addr=10.0.0.2, port=4420 00:21:19.519 [2024-12-11 15:02:12.280883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa3e50 is same with the state(6) to be set 00:21:19.519 [2024-12-11 15:02:12.281041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.519 [2024-12-11 15:02:12.281052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eb8610 with addr=10.0.0.2, port=4420 00:21:19.519 [2024-12-11 15:02:12.281060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eb8610 is same with the state(6) to be set 00:21:19.519 [2024-12-11 15:02:12.281216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:19.519 [2024-12-11 15:02:12.281228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x240c550 with addr=10.0.0.2, port=4420 00:21:19.519 [2024-12-11 15:02:12.281236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240c550 is same with the state(6) to be set 00:21:19.520 [2024-12-11 15:02:12.281265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa3e50 (9): Bad file descriptor 00:21:19.520 [2024-12-11 15:02:12.281277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb8610 (9): Bad file descriptor 00:21:19.520 [2024-12-11 15:02:12.281287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240c550 (9): Bad file descriptor 00:21:19.520 [2024-12-11 15:02:12.281314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:19.520 [2024-12-11 15:02:12.281325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:19.520 [2024-12-11 15:02:12.281333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:19.520 [2024-12-11 15:02:12.281340] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:21:19.520 [2024-12-11 15:02:12.281349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:19.520 [2024-12-11 15:02:12.281355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:19.520 [2024-12-11 15:02:12.281362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:21:19.520 [2024-12-11 15:02:12.281369] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:19.520 [2024-12-11 15:02:12.281376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:19.520 [2024-12-11 15:02:12.281382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:19.520 [2024-12-11 15:02:12.281392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:21:19.520 [2024-12-11 15:02:12.281398] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:19.778 15:02:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3171249 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3171249 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3171249 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 
00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:20.724 rmmod nvme_tcp 00:21:20.724 rmmod nvme_fabrics 00:21:20.724 rmmod nvme_keyring 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3170944 ']' 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3170944 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3170944 ']' 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3170944 00:21:20.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (3170944) - No such process 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3170944 is not found' 00:21:20.724 Process with pid 3170944 is not found 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.724 15:02:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.263 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:23.263 00:21:23.263 real 0m8.035s 00:21:23.263 user 0m20.176s 00:21:23.263 sys 0m1.440s 00:21:23.263 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.263 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 
00:21:23.263 ************************************ 00:21:23.263 END TEST nvmf_shutdown_tc3 00:21:23.263 ************************************ 00:21:23.263 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:23.263 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:23.263 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:21:23.263 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:23.263 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.263 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:23.263 ************************************ 00:21:23.263 START TEST nvmf_shutdown_tc4 00:21:23.263 ************************************ 00:21:23.263 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:23.263 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:23.263 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:23.263 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:23.263 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:23.264 15:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:23.264 15:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:23.264 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:23.264 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.264 15:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:23.264 Found net devices under 0000:86:00.0: cvl_0_0 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:23.264 Found net devices under 0000:86:00.1: cvl_0_1 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.264 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.265 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.265 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:23.265 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.265 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.265 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:23.265 15:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:23.265 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.265 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.265 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:23.265 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:23.265 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.265 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.265 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.265 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.265 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:23.265 15:02:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:23.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:21:23.265 00:21:23.265 --- 10.0.0.2 ping statistics --- 00:21:23.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.265 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:23.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:21:23.265 00:21:23.265 --- 10.0.0.1 ping statistics --- 00:21:23.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.265 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3172328 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3172328 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3172328 ']' 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
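The nvmf_tcp_init trace above builds the whole TCP test topology from the two E810 ports discovered earlier: the first port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, port 4420 is opened in the firewall, reachability is checked in both directions, and nvmf_tgt is then started inside that namespace. A condensed sketch of those steps, using only the interface names, addresses, and flags visible in the log (the authoritative logic lives in nvmf/common.sh and target/shutdown.sh; bookkeeping such as the iptables comment tag is omitted here):

  # target NIC moves into its own namespace; initiator NIC stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # addressing: 10.0.0.1 = initiator side, 10.0.0.2 = target side
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # allow NVMe/TCP traffic to port 4420 on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # verify reachability in both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

  # nvmfappstart: run the target inside the namespace with core mask 0x1E
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!   # 3172328 in this run; waitforlisten blocks until /var/tmp/spdk.sock is up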
00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.265 15:02:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:23.524 [2024-12-11 15:02:16.310853] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:21:23.524 [2024-12-11 15:02:16.310896] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.524 [2024-12-11 15:02:16.392092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:23.524 [2024-12-11 15:02:16.433062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.524 [2024-12-11 15:02:16.433102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.524 [2024-12-11 15:02:16.433109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.524 [2024-12-11 15:02:16.433116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.524 [2024-12-11 15:02:16.433121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.524 [2024-12-11 15:02:16.434745] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.524 [2024-12-11 15:02:16.434857] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.524 [2024-12-11 15:02:16.434964] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.524 [2024-12-11 15:02:16.434965] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:24.461 [2024-12-11 15:02:17.186062] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:24.461 15:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.461 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:24.461 Malloc1 
00:21:24.461 [2024-12-11 15:02:17.311182] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.461 Malloc2 00:21:24.461 Malloc3 00:21:24.461 Malloc4 00:21:24.461 Malloc5 00:21:24.461 Malloc6 00:21:24.720 Malloc7 00:21:24.720 Malloc8 00:21:24.720 Malloc9 00:21:24.720 Malloc10 00:21:24.720 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.720 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:24.720 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:24.720 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:24.720 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3172604 00:21:24.720 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:24.720 15:02:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:24.978 [2024-12-11 15:02:17.818715] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:30.258 15:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:30.258 15:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3172328 00:21:30.258 15:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3172328 ']' 00:21:30.258 15:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3172328 00:21:30.258 15:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:30.258 15:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.258 15:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3172328 00:21:30.258 15:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:30.258 15:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:30.258 15:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3172328' 00:21:30.258 killing process with pid 3172328 00:21:30.258 15:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3172328 00:21:30.258 15:02:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3172328 00:21:30.258 Write completed with error 
(sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 [2024-12-11 15:02:22.809750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a0180 is same with the state(6) to be set 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 [2024-12-11 15:02:22.810349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O 
failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 [2024-12-11 15:02:22.811199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 
00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.258 Write completed with error (sct=0, sc=8) 00:21:30.258 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 [2024-12-11 15:02:22.812225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:30.259 NVMe io qpair process completion error 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed 
with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 [2024-12-11 15:02:22.815893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:30.259 starting I/O failed: -6 00:21:30.259 [2024-12-11 15:02:22.816027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ec40 is same with the state(6) to be set 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 [2024-12-11 15:02:22.816065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ec40 is same with the state(6) to be set 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 
[2024-12-11 15:02:22.816302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ddd0 is same with the state(6) to be set 00:21:30.259 starting I/O failed: -6 00:21:30.259 [2024-12-11 15:02:22.816328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ddd0 is same with the state(6) to be set 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 [2024-12-11 15:02:22.816337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ddd0 is same with starting I/O failed: -6 00:21:30.259 the state(6) to be set 00:21:30.259 [2024-12-11 15:02:22.816347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ddd0 is same with the state(6) to be set 00:21:30.259 [2024-12-11 15:02:22.816353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ddd0 is same with Write completed with error (sct=0, sc=8) 00:21:30.259 the state(6) to be set 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 [2024-12-11 15:02:22.816793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 
00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 starting I/O failed: -6 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.259 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 [2024-12-11 15:02:22.817976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error 
-6 (No such device or address) on qpair id 1 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, 
sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 [2024-12-11 15:02:22.819554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:30.260 NVMe io qpair process completion error 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 [2024-12-11 15:02:22.820067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160cf60 is same with the state(6) to be set 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 [2024-12-11 15:02:22.820100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160cf60 is same with the state(6) to be set 00:21:30.260 [2024-12-11 15:02:22.820109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160cf60 is same with the state(6) to be set 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 [2024-12-11 15:02:22.820116] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160cf60 is same with the state(6) to be set 00:21:30.260 [2024-12-11 15:02:22.820123] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160cf60 is same with the state(6) to be set 00:21:30.260 Write completed with error (sct=0, sc=8) 
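At this point the shutdown scenario is in full swing: target/shutdown.sh launched spdk_nvme_perf (queue depth 128, 44 KiB random writes for 20 seconds against 10.0.0.2:4420), let it ramp for five seconds, and then killed the nvmf_tgt process (pid 3172328) while I/O was still outstanding. The repeated "Write completed with error (sct=0, sc=8)", "starting I/O failed: -6" and "CQ transport error -6 (No such device or address)" lines above and below are the expected client-side fallout of that kill: each connected queue pair loses its TCP connection and the initiator fails back every in-flight write. A simplified paraphrase of the orchestration, using only the parameters visible in the log (the exact helpers, e.g. killprocess, live in the test scripts):

  # run the perf workload in the background against the listeners created earlier
  ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
      -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!

  sleep 5                     # give the workload time to connect to all subsystems
  kill "$nvmfpid"             # kill the target (3172328) while writes are in flight
  wait "$nvmfpid" || true     # the initiator now reports the transport errors seen here
  wait "$perfpid" || true     # perf exits once its queue pairs have been torn down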
00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 [2024-12-11 15:02:22.820620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:30.260 [2024-12-11 15:02:22.820738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160d900 is same with the state(6) to be set 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 [2024-12-11 15:02:22.820760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160d900 is same with the state(6) to be set 00:21:30.260 [2024-12-11 15:02:22.820768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160d900 is same with the state(6) to be set 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 [2024-12-11 15:02:22.820776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160d900 is same with the state(6) to be set 00:21:30.260 starting I/O failed: -6 00:21:30.260 [2024-12-11 15:02:22.820783] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160d900 is same with the state(6) to be set 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.260 Write completed with error (sct=0, sc=8) 00:21:30.260 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with 
error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 [2024-12-11 15:02:22.821052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ca90 is same with the state(6) to be set 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 [2024-12-11 15:02:22.821073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ca90 is same with the state(6) to be set 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 [2024-12-11 15:02:22.821080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ca90 is same with the state(6) to be set 00:21:30.261 [2024-12-11 15:02:22.821088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ca90 is same with the state(6) to be set 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 [2024-12-11 15:02:22.821095] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ca90 is same with the state(6) to be set 00:21:30.261 starting I/O failed: -6 00:21:30.261 [2024-12-11 15:02:22.821101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ca90 is same with the state(6) to be set 00:21:30.261 [2024-12-11 15:02:22.821112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ca90 is same with the state(6) to be set 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 [2024-12-11 15:02:22.821119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ca90 is same with the state(6) to be set 00:21:30.261 starting I/O failed: -6 00:21:30.261 [2024-12-11 15:02:22.821125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ca90 is same with the state(6) to be set 00:21:30.261 [2024-12-11 15:02:22.821131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x160ca90 is same with the state(6) to be set 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write 
completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 [2024-12-11 15:02:22.821515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 
starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 [2024-12-11 15:02:22.822516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write 
completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.261 starting I/O failed: -6 00:21:30.261 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 [2024-12-11 15:02:22.824207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 
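The repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs above appear to come from a host-side submit-and-poll loop watching the target disappear: -6 is -ENXIO ("No such device or address"), and sct=0 / sc=8 is the generic NVMe status "Command Aborted due to SQ Deletion", while the "CQ transport error -6 ... on qpair id N" entries are reported by the driver itself in spdk_nvme_qpair_process_completions(). The sketch below is a minimal, hypothetical reconstruction of such a loop against the public SPDK NVMe API; it is not the actual test tool used in this run, and the helper name submit_and_poll() is illustrative only.

#include "spdk/nvme.h"
#include <stdio.h>

static void
write_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* In this log: sct=0, sc=8 -> generic status, command aborted due to SQ deletion. */
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

/* Hypothetical helper: submit one write and drain completions on an already-connected qpair. */
static void
submit_and_poll(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, void *buf)
{
	int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf,
					0 /* starting LBA */, 1 /* LBA count */,
					write_complete, NULL, 0 /* io_flags */);
	if (rc != 0) {
		/* Submission fails with -ENXIO (-6) once the controller/target is gone. */
		printf("starting I/O failed: %d\n", rc);
	}

	/*
	 * A negative return here corresponds to the driver-side
	 * "CQ transport error -6 ... on qpair id N" entries in the log above.
	 */
	int32_t done = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
	if (done < 0) {
		fprintf(stderr, "NVMe io qpair process completion error: %d\n", done);
	}
}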
00:21:30.262 NVMe io qpair process completion error 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 [2024-12-11 15:02:22.825153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 
00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 [2024-12-11 15:02:22.826079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.262 starting I/O failed: -6 00:21:30.262 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, 
sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 [2024-12-11 15:02:22.827096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 
00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 
00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 [2024-12-11 15:02:22.828949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:30.263 NVMe io qpair process completion error 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 Write completed with error (sct=0, sc=8) 00:21:30.263 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 
starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 [2024-12-11 15:02:22.830001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 
Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 [2024-12-11 15:02:22.830903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting 
I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 [2024-12-11 15:02:22.831914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.264 Write completed with error (sct=0, sc=8) 00:21:30.264 starting I/O failed: -6 00:21:30.265 Write 
completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 [2024-12-11 15:02:22.833480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.265 NVMe io qpair process completion error 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write 
completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 [2024-12-11 15:02:22.834572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error 
(sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 [2024-12-11 15:02:22.835511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 
00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.265 starting I/O failed: -6 00:21:30.265 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 [2024-12-11 15:02:22.836527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 
00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 
00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 [2024-12-11 15:02:22.839401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:30.266 NVMe io qpair process completion error 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 
00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 starting I/O failed: -6 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 Write completed with error (sct=0, sc=8) 00:21:30.266 [2024-12-11 15:02:22.840419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with 
error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 [2024-12-11 15:02:22.841345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error 
(sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 [2024-12-11 15:02:22.842355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.267 Write completed with error (sct=0, sc=8) 00:21:30.267 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error 
(sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 [2024-12-11 15:02:22.844926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:30.268 NVMe io qpair process completion error 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 
00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 [2024-12-11 15:02:22.845994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 
00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 [2024-12-11 15:02:22.846872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 
Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.268 Write completed with error (sct=0, sc=8) 00:21:30.268 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 [2024-12-11 15:02:22.847883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write 
completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write 
completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 [2024-12-11 15:02:22.849787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:30.269 NVMe io qpair process completion error 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 [2024-12-11 15:02:22.851011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.269 starting I/O failed: -6 00:21:30.269 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 [2024-12-11 15:02:22.851840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed 
with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting 
I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 [2024-12-11 15:02:22.852921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 
00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.270 Write completed with error (sct=0, sc=8) 00:21:30.270 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 [2024-12-11 15:02:22.857083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:30.271 NVMe io qpair process completion error 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, 
sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, 
sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 [2024-12-11 15:02:22.862438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:30.271 starting I/O failed: -6 00:21:30.271 starting I/O failed: -6 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 
starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 [2024-12-11 15:02:22.863396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 Write completed with error (sct=0, sc=8) 00:21:30.271 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write 
completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 [2024-12-11 15:02:22.864405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 
00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 
00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 Write completed with error (sct=0, sc=8) 00:21:30.272 starting I/O failed: -6 00:21:30.272 [2024-12-11 15:02:22.867330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:30.272 NVMe io qpair process completion error 00:21:30.272 Initializing NVMe Controllers 00:21:30.272 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:21:30.272 Controller IO queue size 128, less than required. 00:21:30.272 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:30.272 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:21:30.272 Controller IO queue size 128, less than required. 00:21:30.272 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:30.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:21:30.273 Controller IO queue size 128, less than required. 00:21:30.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:30.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:21:30.273 Controller IO queue size 128, less than required. 00:21:30.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:30.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:21:30.273 Controller IO queue size 128, less than required. 00:21:30.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:30.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:21:30.273 Controller IO queue size 128, less than required. 00:21:30.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:30.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:21:30.273 Controller IO queue size 128, less than required. 00:21:30.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:30.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:21:30.273 Controller IO queue size 128, less than required. 00:21:30.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:30.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:21:30.273 Controller IO queue size 128, less than required. 00:21:30.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:30.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:30.273 Controller IO queue size 128, less than required. 00:21:30.273 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
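The repeated "Controller IO queue size 128, less than required" warnings above mean spdk_nvme_perf asked for a deeper IO queue than the attached subsystems advertise, so the excess requests sit queued in the NVMe driver. A minimal sketch of re-running a comparable fabric workload with a shallower queue and smaller IOs follows; it assumes the standard spdk_nvme_perf options (-q queue depth, -o IO size in bytes, -w workload, -t seconds, -r transport ID) and is not the exact command line used by shutdown.sh, which this log does not show:
# Hypothetical re-run with a shallower queue (-q) and 4 KiB writes (-o) against one subsystem;
# the transport ID mirrors the listener shown earlier in this log (10.0.0.2:4420).
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf \
    -q 64 -o 4096 -w write -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'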
00:21:30.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:21:30.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:21:30.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:21:30.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:21:30.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:21:30.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:21:30.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:21:30.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:21:30.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:21:30.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:30.273 Initialization complete. Launching workers. 00:21:30.273 ======================================================== 00:21:30.273 Latency(us) 00:21:30.273 Device Information : IOPS MiB/s Average min max 00:21:30.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2142.61 92.07 59745.39 917.37 108678.70 00:21:30.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2117.39 90.98 60470.47 666.39 128875.94 00:21:30.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2164.66 93.01 59176.47 919.52 108990.99 00:21:30.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2181.83 93.75 58732.93 905.16 103628.35 00:21:30.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2157.88 92.72 59402.70 535.72 102655.68 00:21:30.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2139.64 91.94 59621.42 706.24 119064.77 00:21:30.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2126.08 91.35 60343.61 678.18 121343.90 00:21:30.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2098.09 90.15 60499.57 915.83 98820.77 00:21:30.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2189.46 94.08 57986.07 910.48 98945.44 00:21:30.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2137.74 91.86 59399.63 952.43 101156.78 00:21:30.273 ======================================================== 00:21:30.273 Total : 21455.38 921.91 59528.81 535.72 128875.94 00:21:30.273 00:21:30.273 [2024-12-11 15:02:22.870286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x637ef0 is same with the state(6) to be set 00:21:30.273 [2024-12-11 15:02:22.870337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x637560 is same with the state(6) to be set 00:21:30.273 [2024-12-11 15:02:22.870369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x638a70 is same with the state(6) to be set 00:21:30.273 [2024-12-11 15:02:22.870397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x637bc0 is same with the state(6) to be set 00:21:30.273 [2024-12-11 15:02:22.870426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x638410 is same with the state(6) to be set 00:21:30.273 [2024-12-11 15:02:22.870455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x639900 is same with the state(6) to be set 00:21:30.273 [2024-12-11 15:02:22.870483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x638740 is same with the state(6) to be set 00:21:30.273 [2024-12-11 15:02:22.870510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x637890 is same with the state(6) to be set 00:21:30.273 [2024-12-11 15:02:22.870539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x639ae0 is same with the state(6) to be set 00:21:30.273 [2024-12-11 15:02:22.870569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x639720 is same with the state(6) to be set 00:21:30.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:30.273 15:02:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3172604 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3172604 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3172604 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:31.212 15:02:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:31.212 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:31.212 rmmod nvme_tcp 00:21:31.212 rmmod nvme_fabrics 00:21:31.212 rmmod nvme_keyring 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3172328 ']' 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3172328 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3172328 ']' 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3172328 00:21:31.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (3172328) - No such process 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3172328 is not found' 00:21:31.472 Process with pid 3172328 is not found 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.472 15:02:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.377 15:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:33.377 00:21:33.377 real 0m10.509s 00:21:33.377 user 0m27.656s 00:21:33.377 sys 0m5.103s 00:21:33.377 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.377 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:33.377 ************************************ 00:21:33.377 END TEST nvmf_shutdown_tc4 00:21:33.377 ************************************ 00:21:33.377 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:33.377 00:21:33.377 real 0m42.448s 00:21:33.377 user 1m46.606s 00:21:33.377 sys 0m14.065s 00:21:33.377 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.377 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:33.377 ************************************ 00:21:33.377 END TEST nvmf_shutdown 00:21:33.377 ************************************ 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:33.637 ************************************ 00:21:33.637 START TEST nvmf_nsid 00:21:33.637 ************************************ 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:33.637 * Looking for test storage... 
00:21:33.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:33.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.637 --rc genhtml_branch_coverage=1 00:21:33.637 --rc genhtml_function_coverage=1 00:21:33.637 --rc genhtml_legend=1 00:21:33.637 --rc geninfo_all_blocks=1 00:21:33.637 --rc geninfo_unexecuted_blocks=1 00:21:33.637 00:21:33.637 ' 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:33.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.637 --rc genhtml_branch_coverage=1 00:21:33.637 --rc genhtml_function_coverage=1 00:21:33.637 --rc genhtml_legend=1 00:21:33.637 --rc geninfo_all_blocks=1 00:21:33.637 --rc geninfo_unexecuted_blocks=1 00:21:33.637 00:21:33.637 ' 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:33.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.637 --rc genhtml_branch_coverage=1 00:21:33.637 --rc genhtml_function_coverage=1 00:21:33.637 --rc genhtml_legend=1 00:21:33.637 --rc geninfo_all_blocks=1 00:21:33.637 --rc geninfo_unexecuted_blocks=1 00:21:33.637 00:21:33.637 ' 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:33.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.637 --rc genhtml_branch_coverage=1 00:21:33.637 --rc genhtml_function_coverage=1 00:21:33.637 --rc genhtml_legend=1 00:21:33.637 --rc geninfo_all_blocks=1 00:21:33.637 --rc geninfo_unexecuted_blocks=1 00:21:33.637 00:21:33.637 ' 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.637 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:33.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:33.638 15:02:26 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:40.299 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:40.299 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
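At this point both E810 ports (0000:86:00.0 and 0000:86:00.1, device ID 0x159b) have been matched against the supported-device lists; the steps that continue below map each PCI function to its kernel network interface by listing the device's net/ directory in sysfs. A minimal sketch of that lookup, with an illustrative variable name (the script itself collects the results into the pci_net_devs array):
# Find the net device registered for one of the E810 functions discovered above.
pci="0000:86:00.0"
ls "/sys/bus/pci/devices/$pci/net/"    # on this host the log reports cvl_0_0 here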
00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:40.299 Found net devices under 0000:86:00.0: cvl_0_0 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:40.299 Found net devices under 0000:86:00.1: cvl_0_1 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.299 15:02:32 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:40.299 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:40.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:21:40.300 00:21:40.300 --- 10.0.0.2 ping statistics --- 00:21:40.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.300 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:21:40.300 00:21:40.300 --- 10.0.0.1 ping statistics --- 00:21:40.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.300 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3177260 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3177260 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3177260 ']' 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:40.300 [2024-12-11 15:02:32.621493] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:21:40.300 [2024-12-11 15:02:32.621539] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.300 [2024-12-11 15:02:32.702096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.300 [2024-12-11 15:02:32.744840] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.300 [2024-12-11 15:02:32.744872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.300 [2024-12-11 15:02:32.744880] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.300 [2024-12-11 15:02:32.744886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.300 [2024-12-11 15:02:32.744892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.300 [2024-12-11 15:02:32.745466] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3177303 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=29180437-f3f8-4b8b-98e0-a977444be7f1 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=dd7ff209-be47-46a1-b166-230ea45f6f82 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=81c29bb7-7be9-4134-87df-34461eb49c97 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:40.300 null0 00:21:40.300 null1 00:21:40.300 null2 00:21:40.300 [2024-12-11 15:02:32.932746] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:21:40.300 [2024-12-11 15:02:32.932791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3177303 ] 00:21:40.300 [2024-12-11 15:02:32.935273] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.300 [2024-12-11 15:02:32.959505] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3177303 /var/tmp/tgt2.sock 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3177303 ']' 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:40.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
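The second target just launched on /var/tmp/tgt2.sock exposes namespaces built from the three UUIDs generated above, and the checks further below verify that each namespace's reported NGUID is that UUID with the dashes stripped, compared on the upper-cased hex. A minimal standalone sketch of one such check, using the first UUID from this run; the variable names are illustrative and not taken from nsid.sh:
# Strip dashes and upper-case the UUID to get the expected NGUID form.
uuid="29180437-f3f8-4b8b-98e0-a977444be7f1"
expected="$(tr -d '-' <<< "$uuid" | tr '[:lower:]' '[:upper:]')"
# Read the NGUID the connected namespace reports (same nvme/jq pipeline the test uses).
reported="$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid | tr '[:lower:]' '[:upper:]')"
[[ "$reported" == "$expected" ]] && echo "NGUID matches: $reported"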
00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.300 15:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:40.300 [2024-12-11 15:02:33.006032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.300 [2024-12-11 15:02:33.046992] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.300 15:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.300 15:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:40.300 15:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:40.559 [2024-12-11 15:02:33.572083] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.559 [2024-12-11 15:02:33.588194] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:40.818 nvme0n1 nvme0n2 00:21:40.818 nvme1n1 00:21:40.818 15:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:40.818 15:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:40.818 15:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:41.754 15:02:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:41.754 15:02:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:41.754 15:02:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:21:41.754 15:02:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:41.754 15:02:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:41.754 15:02:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:41.754 15:02:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:41.754 15:02:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:41.754 15:02:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:41.754 15:02:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:41.754 15:02:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:41.754 15:02:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:41.754 15:02:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:42.687 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:42.687 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:42.687 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:42.687 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:42.946 15:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 29180437-f3f8-4b8b-98e0-a977444be7f1 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=29180437f3f84b8b98e0a977444be7f1 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 29180437F3F84B8B98E0A977444BE7F1 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 29180437F3F84B8B98E0A977444BE7F1 == \2\9\1\8\0\4\3\7\F\3\F\8\4\B\8\B\9\8\E\0\A\9\7\7\4\4\4\B\E\7\F\1 ]] 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid dd7ff209-be47-46a1-b166-230ea45f6f82 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=dd7ff209be4746a1b166230ea45f6f82 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DD7FF209BE4746A1B166230EA45F6F82 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ DD7FF209BE4746A1B166230EA45F6F82 == \D\D\7\F\F\2\0\9\B\E\4\7\4\6\A\1\B\1\6\6\2\3\0\E\A\4\5\F\6\F\8\2 ]] 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:42.946 15:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 81c29bb7-7be9-4134-87df-34461eb49c97 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=81c29bb77be9413487df34461eb49c97 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 81C29BB77BE9413487DF34461EB49C97 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 81C29BB77BE9413487DF34461EB49C97 == \8\1\C\2\9\B\B\7\7\B\E\9\4\1\3\4\8\7\D\F\3\4\4\6\1\E\B\4\9\C\9\7 ]] 00:21:42.946 15:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:43.205 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:43.205 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:43.205 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3177303 00:21:43.205 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3177303 ']' 00:21:43.205 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3177303 00:21:43.205 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:43.205 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.205 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3177303 00:21:43.205 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:43.205 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:43.205 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3177303' 00:21:43.205 killing process with pid 3177303 00:21:43.205 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3177303 00:21:43.205 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3177303 00:21:43.464 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:43.464 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:43.464 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:43.464 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:43.464 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:21:43.464 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:43.464 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:43.464 rmmod nvme_tcp 00:21:43.464 rmmod nvme_fabrics 00:21:43.723 rmmod nvme_keyring 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3177260 ']' 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3177260 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3177260 ']' 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3177260 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3177260 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3177260' 00:21:43.723 killing process with pid 3177260 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3177260 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3177260 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.723 15:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.260 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:46.260 00:21:46.260 real 0m12.362s 00:21:46.260 user 0m9.606s 
00:21:46.260 sys 0m5.542s 00:21:46.260 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.260 15:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:46.260 ************************************ 00:21:46.260 END TEST nvmf_nsid 00:21:46.260 ************************************ 00:21:46.260 15:02:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:46.260 00:21:46.260 real 12m0.390s 00:21:46.260 user 25m43.992s 00:21:46.260 sys 3m44.005s 00:21:46.260 15:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:46.260 15:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:46.260 ************************************ 00:21:46.260 END TEST nvmf_target_extra 00:21:46.260 ************************************ 00:21:46.260 15:02:38 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:46.260 15:02:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:46.260 15:02:38 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.260 15:02:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:46.260 ************************************ 00:21:46.260 START TEST nvmf_host 00:21:46.260 ************************************ 00:21:46.260 15:02:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:46.260 * Looking for test storage... 00:21:46.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:46.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.260 --rc genhtml_branch_coverage=1 00:21:46.260 --rc genhtml_function_coverage=1 00:21:46.260 --rc genhtml_legend=1 00:21:46.260 --rc geninfo_all_blocks=1 00:21:46.260 --rc geninfo_unexecuted_blocks=1 00:21:46.260 00:21:46.260 ' 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:46.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.260 --rc genhtml_branch_coverage=1 00:21:46.260 --rc genhtml_function_coverage=1 00:21:46.260 --rc genhtml_legend=1 00:21:46.260 --rc geninfo_all_blocks=1 00:21:46.260 --rc geninfo_unexecuted_blocks=1 00:21:46.260 00:21:46.260 ' 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:46.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.260 --rc genhtml_branch_coverage=1 00:21:46.260 --rc genhtml_function_coverage=1 00:21:46.260 --rc genhtml_legend=1 00:21:46.260 --rc geninfo_all_blocks=1 00:21:46.260 --rc geninfo_unexecuted_blocks=1 00:21:46.260 00:21:46.260 ' 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:46.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.260 --rc genhtml_branch_coverage=1 00:21:46.260 --rc genhtml_function_coverage=1 00:21:46.260 --rc genhtml_legend=1 00:21:46.260 --rc geninfo_all_blocks=1 00:21:46.260 --rc geninfo_unexecuted_blocks=1 00:21:46.260 00:21:46.260 ' 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.260 15:02:39 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:46.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.261 ************************************ 00:21:46.261 START TEST nvmf_multicontroller 00:21:46.261 ************************************ 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:46.261 * Looking for test storage... 
00:21:46.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:21:46.261 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:46.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.521 --rc genhtml_branch_coverage=1 00:21:46.521 --rc genhtml_function_coverage=1 00:21:46.521 --rc genhtml_legend=1 00:21:46.521 --rc geninfo_all_blocks=1 00:21:46.521 --rc geninfo_unexecuted_blocks=1 00:21:46.521 00:21:46.521 ' 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:46.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.521 --rc genhtml_branch_coverage=1 00:21:46.521 --rc genhtml_function_coverage=1 00:21:46.521 --rc genhtml_legend=1 00:21:46.521 --rc geninfo_all_blocks=1 00:21:46.521 --rc geninfo_unexecuted_blocks=1 00:21:46.521 00:21:46.521 ' 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:46.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.521 --rc genhtml_branch_coverage=1 00:21:46.521 --rc genhtml_function_coverage=1 00:21:46.521 --rc genhtml_legend=1 00:21:46.521 --rc geninfo_all_blocks=1 00:21:46.521 --rc geninfo_unexecuted_blocks=1 00:21:46.521 00:21:46.521 ' 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:46.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.521 --rc genhtml_branch_coverage=1 00:21:46.521 --rc genhtml_function_coverage=1 00:21:46.521 --rc genhtml_legend=1 00:21:46.521 --rc geninfo_all_blocks=1 00:21:46.521 --rc geninfo_unexecuted_blocks=1 00:21:46.521 00:21:46.521 ' 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:46.521 15:02:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.521 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:46.522 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:46.522 15:02:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:46.522 15:02:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.093 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:53.093 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:53.093 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:53.093 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:53.093 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:53.093 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:53.093 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:53.093 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:53.093 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:53.093 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:53.093 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:21:53.093 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:53.094 
15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:53.094 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:53.094 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:53.094 15:02:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:53.094 Found net devices under 0000:86:00.0: cvl_0_0 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:53.094 Found net devices under 0000:86:00.1: cvl_0_1 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:53.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:53.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:21:53.094 00:21:53.094 --- 10.0.0.2 ping statistics --- 00:21:53.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.094 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:53.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:53.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:21:53.094 00:21:53.094 --- 10.0.0.1 ping statistics --- 00:21:53.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:53.094 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3181527 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3181527 00:21:53.094 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3181527 ']' 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.095 [2024-12-11 15:02:45.438765] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:21:53.095 [2024-12-11 15:02:45.438818] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.095 [2024-12-11 15:02:45.518669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:53.095 [2024-12-11 15:02:45.560897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.095 [2024-12-11 15:02:45.560935] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.095 [2024-12-11 15:02:45.560942] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.095 [2024-12-11 15:02:45.560949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.095 [2024-12-11 15:02:45.560954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:53.095 [2024-12-11 15:02:45.562452] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.095 [2024-12-11 15:02:45.562552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.095 [2024-12-11 15:02:45.562552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.095 [2024-12-11 15:02:45.700655] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.095 Malloc0 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.095 [2024-12-11 15:02:45.760841] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.095 [2024-12-11 15:02:45.772783] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.095 Malloc1 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3181643 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3181643 /var/tmp/bdevperf.sock 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3181643 ']' 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:53.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
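The block above starts bdevperf idle (-z) on a private RPC socket, traps cleanup, and only proceeds once /var/tmp/bdevperf.sock accepts commands; the controllers under test are then attached over that same socket. A minimal standalone sketch of the same sequence follows — $SPDK_DIR, the bdevperf_pid variable and the rpc_get_methods polling loop are illustrative assumptions, not the harness's own waitforlisten helper:

# Start bdevperf with no workload yet (-z: wait for RPC configuration) on its own socket.
$SPDK_DIR/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
bdevperf_pid=$!

# Poll until the UNIX-domain RPC socket answers.
until $SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
done

# First path to the subsystem; this is what produces the NVMe0n1 bdev seen below.
$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1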
00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.095 15:02:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.095 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.095 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:53.095 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:53.095 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.095 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.354 NVMe0n1 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.354 1 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.354 request: 00:21:53.354 { 00:21:53.354 "name": "NVMe0", 00:21:53.354 "trtype": "tcp", 00:21:53.354 "traddr": "10.0.0.2", 00:21:53.354 "adrfam": "ipv4", 00:21:53.354 "trsvcid": "4420", 00:21:53.354 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:21:53.354 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:53.354 "hostaddr": "10.0.0.1", 00:21:53.354 "prchk_reftag": false, 00:21:53.354 "prchk_guard": false, 00:21:53.354 "hdgst": false, 00:21:53.354 "ddgst": false, 00:21:53.354 "allow_unrecognized_csi": false, 00:21:53.354 "method": "bdev_nvme_attach_controller", 00:21:53.354 "req_id": 1 00:21:53.354 } 00:21:53.354 Got JSON-RPC error response 00:21:53.354 response: 00:21:53.354 { 00:21:53.354 "code": -114, 00:21:53.354 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:53.354 } 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.354 request: 00:21:53.354 { 00:21:53.354 "name": "NVMe0", 00:21:53.354 "trtype": "tcp", 00:21:53.354 "traddr": "10.0.0.2", 00:21:53.354 "adrfam": "ipv4", 00:21:53.354 "trsvcid": "4420", 00:21:53.354 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:53.354 "hostaddr": "10.0.0.1", 00:21:53.354 "prchk_reftag": false, 00:21:53.354 "prchk_guard": false, 00:21:53.354 "hdgst": false, 00:21:53.354 "ddgst": false, 00:21:53.354 "allow_unrecognized_csi": false, 00:21:53.354 "method": "bdev_nvme_attach_controller", 00:21:53.354 "req_id": 1 00:21:53.354 } 00:21:53.354 Got JSON-RPC error response 00:21:53.354 response: 00:21:53.354 { 00:21:53.354 "code": -114, 00:21:53.354 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:53.354 } 00:21:53.354 15:02:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.354 request: 00:21:53.354 { 00:21:53.354 "name": "NVMe0", 00:21:53.354 "trtype": "tcp", 00:21:53.354 "traddr": "10.0.0.2", 00:21:53.354 "adrfam": "ipv4", 00:21:53.354 "trsvcid": "4420", 00:21:53.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.354 "hostaddr": "10.0.0.1", 00:21:53.354 "prchk_reftag": false, 00:21:53.354 "prchk_guard": false, 00:21:53.354 "hdgst": false, 00:21:53.354 "ddgst": false, 00:21:53.354 "multipath": "disable", 00:21:53.354 "allow_unrecognized_csi": false, 00:21:53.354 "method": "bdev_nvme_attach_controller", 00:21:53.354 "req_id": 1 00:21:53.354 } 00:21:53.354 Got JSON-RPC error response 00:21:53.354 response: 00:21:53.354 { 00:21:53.354 "code": -114, 00:21:53.354 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:53.354 } 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:53.354 15:02:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:53.354 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:53.355 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:53.355 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:53.355 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.355 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:53.612 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.612 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:53.612 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.612 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.612 request: 00:21:53.612 { 00:21:53.612 "name": "NVMe0", 00:21:53.612 "trtype": "tcp", 00:21:53.612 "traddr": "10.0.0.2", 00:21:53.612 "adrfam": "ipv4", 00:21:53.612 "trsvcid": "4420", 00:21:53.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.612 "hostaddr": "10.0.0.1", 00:21:53.612 "prchk_reftag": false, 00:21:53.612 "prchk_guard": false, 00:21:53.612 "hdgst": false, 00:21:53.612 "ddgst": false, 00:21:53.612 "multipath": "failover", 00:21:53.612 "allow_unrecognized_csi": false, 00:21:53.612 "method": "bdev_nvme_attach_controller", 00:21:53.612 "req_id": 1 00:21:53.612 } 00:21:53.612 Got JSON-RPC error response 00:21:53.612 response: 00:21:53.613 { 00:21:53.613 "code": -114, 00:21:53.613 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:53.613 } 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.613 NVMe0n1 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
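The rejected attaches and the accepted one above pin down the multipath contract this test exercises: re-issuing bdev_nvme_attach_controller under the same -b NVMe0 name fails with JSON-RPC error -114 for every variant that reuses the 10.0.0.2:4420 path — different hostnqn, different subsystem, -x disable, -x failover — and only the genuinely new 4421 listener on the same subsystem is accepted as an additional path. A hedged recap of the two contrasting calls, reusing the same socket and $SPDK_DIR placeholder as in the sketch above:

# Same name, same traddr/trsvcid: rejected with -114 even when -x failover is requested.
$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover || echo "rejected as expected"

# Same name, new listener on the same subsystem: accepted as an additional path for NVMe0.
$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1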
00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.613 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:53.613 15:02:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:54.985 { 00:21:54.985 "results": [ 00:21:54.985 { 00:21:54.985 "job": "NVMe0n1", 00:21:54.985 "core_mask": "0x1", 00:21:54.985 "workload": "write", 00:21:54.985 "status": "finished", 00:21:54.985 "queue_depth": 128, 00:21:54.985 "io_size": 4096, 00:21:54.985 "runtime": 1.005056, 00:21:54.985 "iops": 24409.585137544575, 00:21:54.985 "mibps": 95.3499419435335, 00:21:54.985 "io_failed": 0, 00:21:54.985 "io_timeout": 0, 00:21:54.985 "avg_latency_us": 5232.954227331775, 00:21:54.985 "min_latency_us": 3120.0834782608695, 00:21:54.985 "max_latency_us": 11625.51652173913 00:21:54.985 } 00:21:54.985 ], 00:21:54.985 "core_count": 1 00:21:54.985 } 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3181643 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 3181643 ']' 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3181643 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3181643 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3181643' 00:21:54.985 killing process with pid 3181643 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3181643 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3181643 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt -type f 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:54.985 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt --- 00:21:54.985 [2024-12-11 15:02:45.873716] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:21:54.985 [2024-12-11 15:02:45.873764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181643 ] 00:21:54.985 [2024-12-11 15:02:45.950974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.985 [2024-12-11 15:02:45.993583] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:54.985 [2024-12-11 15:02:46.567532] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name f64b8fe8-0630-4612-a8ed-3b04a1c98752 already exists 00:21:54.985 [2024-12-11 15:02:46.567562] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:f64b8fe8-0630-4612-a8ed-3b04a1c98752 alias for bdev NVMe1n1 00:21:54.985 [2024-12-11 15:02:46.567570] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:54.985 Running I/O for 1 seconds... 00:21:54.985 24341.00 IOPS, 95.08 MiB/s 00:21:54.985 Latency(us) 00:21:54.985 [2024-12-11T14:02:48.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.985 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:54.985 NVMe0n1 : 1.01 24409.59 95.35 0.00 0.00 5232.95 3120.08 11625.52 00:21:54.985 [2024-12-11T14:02:48.033Z] =================================================================================================================== 00:21:54.985 [2024-12-11T14:02:48.033Z] Total : 24409.59 95.35 0.00 0.00 5232.95 3120.08 11625.52 00:21:54.985 Received shutdown signal, test time was about 1.000000 seconds 00:21:54.985 00:21:54.985 Latency(us) 00:21:54.985 [2024-12-11T14:02:48.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:54.985 [2024-12-11T14:02:48.033Z] =================================================================================================================== 00:21:54.985 [2024-12-11T14:02:48.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:54.985 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt --- 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:54.985 15:02:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:54.985 rmmod nvme_tcp 00:21:54.985 rmmod nvme_fabrics 00:21:54.985 rmmod nvme_keyring 00:21:55.244 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:55.244 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:55.244 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 
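Teardown above runs in two layers: nvmfcleanup unloads the kernel initiator modules (the rmmod lines), then nvmftestfini stops the long-running target (pid 3181527, continued on the next line) and unwinds the test networking. A condensed sketch of that order — nvmfpid is illustrative, and the single ip netns delete stands in for the harness's _remove_spdk_ns helper (an assumption about what it amounts to for this run):

# Host side: remove the kernel NVMe/TCP initiator stack.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# Target side: stop nvmf_tgt, drop the SPDK_NVMF iptables rules, tear down the namespace.
kill "$nvmfpid" 2>/dev/null || true
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1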
00:21:55.244 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3181527 ']' 00:21:55.244 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3181527 00:21:55.244 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3181527 ']' 00:21:55.244 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3181527 00:21:55.244 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:55.244 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.244 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3181527 00:21:55.244 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:55.244 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:55.244 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3181527' 00:21:55.244 killing process with pid 3181527 00:21:55.244 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3181527 00:21:55.244 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3181527 00:21:55.503 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:55.504 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:55.504 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:55.504 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:55.504 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:55.504 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:55.504 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:55.504 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:55.504 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:55.504 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.504 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.504 15:02:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.409 15:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:57.409 00:21:57.409 real 0m11.201s 00:21:57.409 user 0m12.127s 00:21:57.409 sys 0m5.230s 00:21:57.409 15:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:57.409 15:02:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.409 ************************************ 00:21:57.409 END TEST nvmf_multicontroller 00:21:57.409 ************************************ 00:21:57.409 15:02:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:57.409 15:02:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:57.409 15:02:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:57.409 15:02:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.409 ************************************ 00:21:57.409 START TEST nvmf_aer 00:21:57.409 ************************************ 00:21:57.409 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:57.669 * Looking for test storage... 00:21:57.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:57.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.669 --rc genhtml_branch_coverage=1 00:21:57.669 --rc genhtml_function_coverage=1 00:21:57.669 --rc genhtml_legend=1 00:21:57.669 --rc geninfo_all_blocks=1 00:21:57.669 --rc geninfo_unexecuted_blocks=1 00:21:57.669 00:21:57.669 ' 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:57.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.669 --rc genhtml_branch_coverage=1 00:21:57.669 --rc genhtml_function_coverage=1 00:21:57.669 --rc genhtml_legend=1 00:21:57.669 --rc geninfo_all_blocks=1 00:21:57.669 --rc geninfo_unexecuted_blocks=1 00:21:57.669 00:21:57.669 ' 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:57.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.669 --rc genhtml_branch_coverage=1 00:21:57.669 --rc genhtml_function_coverage=1 00:21:57.669 --rc genhtml_legend=1 00:21:57.669 --rc geninfo_all_blocks=1 00:21:57.669 --rc geninfo_unexecuted_blocks=1 00:21:57.669 00:21:57.669 ' 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:57.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:57.669 --rc genhtml_branch_coverage=1 00:21:57.669 --rc genhtml_function_coverage=1 00:21:57.669 --rc genhtml_legend=1 00:21:57.669 --rc geninfo_all_blocks=1 00:21:57.669 --rc geninfo_unexecuted_blocks=1 00:21:57.669 00:21:57.669 ' 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.669 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:57.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:57.670 15:02:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:04.240 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:04.240 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:04.240 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:04.241 Found net devices under 0000:86:00.0: cvl_0_0 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:04.241 15:02:56 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:04.241 Found net devices under 0000:86:00.1: cvl_0_1 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:04.241 
15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:04.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:04.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:22:04.241 00:22:04.241 --- 10.0.0.2 ping statistics --- 00:22:04.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.241 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:04.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:04.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:22:04.241 00:22:04.241 --- 10.0.0.1 ping statistics --- 00:22:04.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.241 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3185416 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3185416 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3185416 ']' 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.241 [2024-12-11 15:02:56.637110] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
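For the aer run the target lives inside the cvl_0_0_ns_spdk namespace (hence the two ping checks: 10.0.0.2 reached from the host, 10.0.0.1 reached from inside the namespace) and is started on four cores before any RPCs are issued. A minimal sketch of that startup, with $SPDK_DIR again a placeholder and the rpc_get_methods poll standing in for the harness's waitforlisten:

# Run nvmf_tgt inside the target namespace: shm id 0, tracepoint mask 0xFFFF, core mask 0xF.
ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Wait for the default /var/tmp/spdk.sock RPC socket, then create the TCP transport.
until $SPDK_DIR/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
$SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192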
00:22:04.241 [2024-12-11 15:02:56.637174] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.241 [2024-12-11 15:02:56.720099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:04.241 [2024-12-11 15:02:56.764319] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.241 [2024-12-11 15:02:56.764352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.241 [2024-12-11 15:02:56.764362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.241 [2024-12-11 15:02:56.764371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.241 [2024-12-11 15:02:56.764377] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:04.241 [2024-12-11 15:02:56.766053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.241 [2024-12-11 15:02:56.766204] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:04.241 [2024-12-11 15:02:56.766247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:04.241 [2024-12-11 15:02:56.766248] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.241 [2024-12-11 15:02:56.905499] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.241 Malloc0 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.241 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.242 [2024-12-11 15:02:56.970269] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.242 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.242 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:04.242 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.242 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.242 [ 00:22:04.242 { 00:22:04.242 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:04.242 "subtype": "Discovery", 00:22:04.242 "listen_addresses": [], 00:22:04.242 "allow_any_host": true, 00:22:04.242 "hosts": [] 00:22:04.242 }, 00:22:04.242 { 00:22:04.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.242 "subtype": "NVMe", 00:22:04.242 "listen_addresses": [ 00:22:04.242 { 00:22:04.242 "trtype": "TCP", 00:22:04.242 "adrfam": "IPv4", 00:22:04.242 "traddr": "10.0.0.2", 00:22:04.242 "trsvcid": "4420" 00:22:04.242 } 00:22:04.242 ], 00:22:04.242 "allow_any_host": true, 00:22:04.242 "hosts": [], 00:22:04.242 "serial_number": "SPDK00000000000001", 00:22:04.242 "model_number": "SPDK bdev Controller", 00:22:04.242 "max_namespaces": 2, 00:22:04.242 "min_cntlid": 1, 00:22:04.242 "max_cntlid": 65519, 00:22:04.242 "namespaces": [ 00:22:04.242 { 00:22:04.242 "nsid": 1, 00:22:04.242 "bdev_name": "Malloc0", 00:22:04.242 "name": "Malloc0", 00:22:04.242 "nguid": "C316D104BC0C4289A17EA0BDD463C494", 00:22:04.242 "uuid": "c316d104-bc0c-4289-a17e-a0bdd463c494" 00:22:04.242 } 00:22:04.242 ] 00:22:04.242 } 00:22:04.242 ] 00:22:04.242 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.242 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:04.242 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:04.242 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3185635 00:22:04.242 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:04.242 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:04.242 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:04.242 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:04.242 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:04.242 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:04.242 15:02:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.242 Malloc1 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.242 Asynchronous Event Request test 00:22:04.242 Attaching to 10.0.0.2 00:22:04.242 Attached to 10.0.0.2 00:22:04.242 Registering asynchronous event callbacks... 00:22:04.242 Starting namespace attribute notice tests for all controllers... 00:22:04.242 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:04.242 aer_cb - Changed Namespace 00:22:04.242 Cleaning up... 
00:22:04.242 [ 00:22:04.242 { 00:22:04.242 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:04.242 "subtype": "Discovery", 00:22:04.242 "listen_addresses": [], 00:22:04.242 "allow_any_host": true, 00:22:04.242 "hosts": [] 00:22:04.242 }, 00:22:04.242 { 00:22:04.242 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.242 "subtype": "NVMe", 00:22:04.242 "listen_addresses": [ 00:22:04.242 { 00:22:04.242 "trtype": "TCP", 00:22:04.242 "adrfam": "IPv4", 00:22:04.242 "traddr": "10.0.0.2", 00:22:04.242 "trsvcid": "4420" 00:22:04.242 } 00:22:04.242 ], 00:22:04.242 "allow_any_host": true, 00:22:04.242 "hosts": [], 00:22:04.242 "serial_number": "SPDK00000000000001", 00:22:04.242 "model_number": "SPDK bdev Controller", 00:22:04.242 "max_namespaces": 2, 00:22:04.242 "min_cntlid": 1, 00:22:04.242 "max_cntlid": 65519, 00:22:04.242 "namespaces": [ 00:22:04.242 { 00:22:04.242 "nsid": 1, 00:22:04.242 "bdev_name": "Malloc0", 00:22:04.242 "name": "Malloc0", 00:22:04.242 "nguid": "C316D104BC0C4289A17EA0BDD463C494", 00:22:04.242 "uuid": "c316d104-bc0c-4289-a17e-a0bdd463c494" 00:22:04.242 }, 00:22:04.242 { 00:22:04.242 "nsid": 2, 00:22:04.242 "bdev_name": "Malloc1", 00:22:04.242 "name": "Malloc1", 00:22:04.242 "nguid": "88D43B9058A2484794393C29BAF56892", 00:22:04.242 "uuid": "88d43b90-58a2-4847-9439-3c29baf56892" 00:22:04.242 } 00:22:04.242 ] 00:22:04.242 } 00:22:04.242 ] 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3185635 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.242 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:04.500 rmmod 
nvme_tcp 00:22:04.500 rmmod nvme_fabrics 00:22:04.500 rmmod nvme_keyring 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3185416 ']' 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3185416 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3185416 ']' 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3185416 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3185416 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3185416' 00:22:04.500 killing process with pid 3185416 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3185416 00:22:04.500 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3185416 00:22:04.759 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:04.759 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:04.759 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:04.759 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:04.759 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:04.759 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:04.759 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:04.759 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:04.759 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:04.759 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.759 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.759 15:02:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.666 15:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:06.666 00:22:06.666 real 0m9.243s 00:22:06.666 user 0m5.151s 00:22:06.666 sys 0m4.873s 00:22:06.666 15:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:06.666 15:02:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:06.666 ************************************ 00:22:06.666 END TEST nvmf_aer 00:22:06.666 ************************************ 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.925 ************************************ 00:22:06.925 START TEST nvmf_async_init 00:22:06.925 ************************************ 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:06.925 * Looking for test storage... 00:22:06.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:06.925 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:06.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.926 --rc genhtml_branch_coverage=1 00:22:06.926 --rc genhtml_function_coverage=1 00:22:06.926 --rc genhtml_legend=1 00:22:06.926 --rc geninfo_all_blocks=1 00:22:06.926 --rc geninfo_unexecuted_blocks=1 00:22:06.926 00:22:06.926 ' 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:06.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.926 --rc genhtml_branch_coverage=1 00:22:06.926 --rc genhtml_function_coverage=1 00:22:06.926 --rc genhtml_legend=1 00:22:06.926 --rc geninfo_all_blocks=1 00:22:06.926 --rc geninfo_unexecuted_blocks=1 00:22:06.926 00:22:06.926 ' 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:06.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.926 --rc genhtml_branch_coverage=1 00:22:06.926 --rc genhtml_function_coverage=1 00:22:06.926 --rc genhtml_legend=1 00:22:06.926 --rc geninfo_all_blocks=1 00:22:06.926 --rc geninfo_unexecuted_blocks=1 00:22:06.926 00:22:06.926 ' 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:06.926 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.926 --rc genhtml_branch_coverage=1 00:22:06.926 --rc genhtml_function_coverage=1 00:22:06.926 --rc genhtml_legend=1 00:22:06.926 --rc geninfo_all_blocks=1 00:22:06.926 --rc geninfo_unexecuted_blocks=1 00:22:06.926 00:22:06.926 ' 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.926 15:02:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:06.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:06.926 
15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:06.926 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:07.185 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=1bf61639dc334c02a951db344d23b0f0 00:22:07.185 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:07.185 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:07.185 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.185 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:07.185 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:07.185 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:07.185 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.185 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:07.185 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.185 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:07.185 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:07.185 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:07.185 15:02:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:13.757 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:13.757 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:13.757 Found net devices under 0000:86:00.0: cvl_0_0 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:13.757 Found net devices under 0000:86:00.1: cvl_0_1 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:13.757 15:03:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:13.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:13.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:22:13.757 00:22:13.757 --- 10.0.0.2 ping statistics --- 00:22:13.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.757 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:13.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:13.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:22:13.757 00:22:13.757 --- 10.0.0.1 ping statistics --- 00:22:13.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.757 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:22:13.757 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3189323 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3189323 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3189323 ']' 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.758 15:03:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.758 [2024-12-11 15:03:05.945257] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:22:13.758 [2024-12-11 15:03:05.945314] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.758 [2024-12-11 15:03:06.024778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.758 [2024-12-11 15:03:06.065062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.758 [2024-12-11 15:03:06.065101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.758 [2024-12-11 15:03:06.065111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.758 [2024-12-11 15:03:06.065120] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.758 [2024-12-11 15:03:06.065126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:13.758 [2024-12-11 15:03:06.065752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.758 [2024-12-11 15:03:06.201824] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.758 null0 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 1bf61639dc334c02a951db344d23b0f0 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.758 [2024-12-11 15:03:06.254134] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.758 nvme0n1 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.758 [ 00:22:13.758 { 00:22:13.758 "name": "nvme0n1", 00:22:13.758 "aliases": [ 00:22:13.758 "1bf61639-dc33-4c02-a951-db344d23b0f0" 00:22:13.758 ], 00:22:13.758 "product_name": "NVMe disk", 00:22:13.758 "block_size": 512, 00:22:13.758 "num_blocks": 2097152, 00:22:13.758 "uuid": "1bf61639-dc33-4c02-a951-db344d23b0f0", 00:22:13.758 "numa_id": 1, 00:22:13.758 "assigned_rate_limits": { 00:22:13.758 "rw_ios_per_sec": 0, 00:22:13.758 "rw_mbytes_per_sec": 0, 00:22:13.758 "r_mbytes_per_sec": 0, 00:22:13.758 "w_mbytes_per_sec": 0 00:22:13.758 }, 00:22:13.758 "claimed": false, 00:22:13.758 "zoned": false, 00:22:13.758 "supported_io_types": { 00:22:13.758 "read": true, 00:22:13.758 "write": true, 00:22:13.758 "unmap": false, 00:22:13.758 "flush": true, 00:22:13.758 "reset": true, 00:22:13.758 "nvme_admin": true, 00:22:13.758 "nvme_io": true, 00:22:13.758 "nvme_io_md": false, 00:22:13.758 "write_zeroes": true, 00:22:13.758 "zcopy": false, 00:22:13.758 "get_zone_info": false, 00:22:13.758 "zone_management": false, 00:22:13.758 "zone_append": false, 00:22:13.758 "compare": true, 00:22:13.758 "compare_and_write": true, 00:22:13.758 "abort": true, 00:22:13.758 "seek_hole": false, 00:22:13.758 "seek_data": false, 00:22:13.758 "copy": true, 00:22:13.758 "nvme_iov_md": false 00:22:13.758 }, 00:22:13.758 
"memory_domains": [ 00:22:13.758 { 00:22:13.758 "dma_device_id": "system", 00:22:13.758 "dma_device_type": 1 00:22:13.758 } 00:22:13.758 ], 00:22:13.758 "driver_specific": { 00:22:13.758 "nvme": [ 00:22:13.758 { 00:22:13.758 "trid": { 00:22:13.758 "trtype": "TCP", 00:22:13.758 "adrfam": "IPv4", 00:22:13.758 "traddr": "10.0.0.2", 00:22:13.758 "trsvcid": "4420", 00:22:13.758 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:13.758 }, 00:22:13.758 "ctrlr_data": { 00:22:13.758 "cntlid": 1, 00:22:13.758 "vendor_id": "0x8086", 00:22:13.758 "model_number": "SPDK bdev Controller", 00:22:13.758 "serial_number": "00000000000000000000", 00:22:13.758 "firmware_revision": "25.01", 00:22:13.758 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.758 "oacs": { 00:22:13.758 "security": 0, 00:22:13.758 "format": 0, 00:22:13.758 "firmware": 0, 00:22:13.758 "ns_manage": 0 00:22:13.758 }, 00:22:13.758 "multi_ctrlr": true, 00:22:13.758 "ana_reporting": false 00:22:13.758 }, 00:22:13.758 "vs": { 00:22:13.758 "nvme_version": "1.3" 00:22:13.758 }, 00:22:13.758 "ns_data": { 00:22:13.758 "id": 1, 00:22:13.758 "can_share": true 00:22:13.758 } 00:22:13.758 } 00:22:13.758 ], 00:22:13.758 "mp_policy": "active_passive" 00:22:13.758 } 00:22:13.758 } 00:22:13.758 ] 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.758 [2024-12-11 15:03:06.510669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:13.758 [2024-12-11 15:03:06.510731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9ac40 (9): Bad file descriptor 00:22:13.758 [2024-12-11 15:03:06.642239] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:22:13.758 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.759 [ 00:22:13.759 { 00:22:13.759 "name": "nvme0n1", 00:22:13.759 "aliases": [ 00:22:13.759 "1bf61639-dc33-4c02-a951-db344d23b0f0" 00:22:13.759 ], 00:22:13.759 "product_name": "NVMe disk", 00:22:13.759 "block_size": 512, 00:22:13.759 "num_blocks": 2097152, 00:22:13.759 "uuid": "1bf61639-dc33-4c02-a951-db344d23b0f0", 00:22:13.759 "numa_id": 1, 00:22:13.759 "assigned_rate_limits": { 00:22:13.759 "rw_ios_per_sec": 0, 00:22:13.759 "rw_mbytes_per_sec": 0, 00:22:13.759 "r_mbytes_per_sec": 0, 00:22:13.759 "w_mbytes_per_sec": 0 00:22:13.759 }, 00:22:13.759 "claimed": false, 00:22:13.759 "zoned": false, 00:22:13.759 "supported_io_types": { 00:22:13.759 "read": true, 00:22:13.759 "write": true, 00:22:13.759 "unmap": false, 00:22:13.759 "flush": true, 00:22:13.759 "reset": true, 00:22:13.759 "nvme_admin": true, 00:22:13.759 "nvme_io": true, 00:22:13.759 "nvme_io_md": false, 00:22:13.759 "write_zeroes": true, 00:22:13.759 "zcopy": false, 00:22:13.759 "get_zone_info": false, 00:22:13.759 "zone_management": false, 00:22:13.759 "zone_append": false, 00:22:13.759 "compare": true, 00:22:13.759 "compare_and_write": true, 00:22:13.759 "abort": true, 00:22:13.759 "seek_hole": false, 00:22:13.759 "seek_data": false, 00:22:13.759 "copy": true, 00:22:13.759 "nvme_iov_md": false 00:22:13.759 }, 00:22:13.759 "memory_domains": [ 00:22:13.759 { 00:22:13.759 "dma_device_id": "system", 00:22:13.759 "dma_device_type": 1 00:22:13.759 } 00:22:13.759 ], 00:22:13.759 "driver_specific": { 00:22:13.759 "nvme": [ 00:22:13.759 { 00:22:13.759 "trid": { 00:22:13.759 "trtype": "TCP", 00:22:13.759 "adrfam": "IPv4", 00:22:13.759 "traddr": "10.0.0.2", 00:22:13.759 "trsvcid": "4420", 00:22:13.759 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:13.759 }, 00:22:13.759 "ctrlr_data": { 00:22:13.759 "cntlid": 2, 00:22:13.759 "vendor_id": "0x8086", 00:22:13.759 "model_number": "SPDK bdev Controller", 00:22:13.759 "serial_number": "00000000000000000000", 00:22:13.759 "firmware_revision": "25.01", 00:22:13.759 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.759 "oacs": { 00:22:13.759 "security": 0, 00:22:13.759 "format": 0, 00:22:13.759 "firmware": 0, 00:22:13.759 "ns_manage": 0 00:22:13.759 }, 00:22:13.759 "multi_ctrlr": true, 00:22:13.759 "ana_reporting": false 00:22:13.759 }, 00:22:13.759 "vs": { 00:22:13.759 "nvme_version": "1.3" 00:22:13.759 }, 00:22:13.759 "ns_data": { 00:22:13.759 "id": 1, 00:22:13.759 "can_share": true 00:22:13.759 } 00:22:13.759 } 00:22:13.759 ], 00:22:13.759 "mp_policy": "active_passive" 00:22:13.759 } 00:22:13.759 } 00:22:13.759 ] 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.27C7ul8GQo 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.27C7ul8GQo 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.27C7ul8GQo 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.759 [2024-12-11 15:03:06.715289] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:13.759 [2024-12-11 15:03:06.715414] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.759 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.759 [2024-12-11 15:03:06.735355] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.018 nvme0n1 00:22:14.018 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.018 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:22:14.018 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.018 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.018 [ 00:22:14.018 { 00:22:14.018 "name": "nvme0n1", 00:22:14.019 "aliases": [ 00:22:14.019 "1bf61639-dc33-4c02-a951-db344d23b0f0" 00:22:14.019 ], 00:22:14.019 "product_name": "NVMe disk", 00:22:14.019 "block_size": 512, 00:22:14.019 "num_blocks": 2097152, 00:22:14.019 "uuid": "1bf61639-dc33-4c02-a951-db344d23b0f0", 00:22:14.019 "numa_id": 1, 00:22:14.019 "assigned_rate_limits": { 00:22:14.019 "rw_ios_per_sec": 0, 00:22:14.019 "rw_mbytes_per_sec": 0, 00:22:14.019 "r_mbytes_per_sec": 0, 00:22:14.019 "w_mbytes_per_sec": 0 00:22:14.019 }, 00:22:14.019 "claimed": false, 00:22:14.019 "zoned": false, 00:22:14.019 "supported_io_types": { 00:22:14.019 "read": true, 00:22:14.019 "write": true, 00:22:14.019 "unmap": false, 00:22:14.019 "flush": true, 00:22:14.019 "reset": true, 00:22:14.019 "nvme_admin": true, 00:22:14.019 "nvme_io": true, 00:22:14.019 "nvme_io_md": false, 00:22:14.019 "write_zeroes": true, 00:22:14.019 "zcopy": false, 00:22:14.019 "get_zone_info": false, 00:22:14.019 "zone_management": false, 00:22:14.019 "zone_append": false, 00:22:14.019 "compare": true, 00:22:14.019 "compare_and_write": true, 00:22:14.019 "abort": true, 00:22:14.019 "seek_hole": false, 00:22:14.019 "seek_data": false, 00:22:14.019 "copy": true, 00:22:14.019 "nvme_iov_md": false 00:22:14.019 }, 00:22:14.019 "memory_domains": [ 00:22:14.019 { 00:22:14.019 "dma_device_id": "system", 00:22:14.019 "dma_device_type": 1 00:22:14.019 } 00:22:14.019 ], 00:22:14.019 "driver_specific": { 00:22:14.019 "nvme": [ 00:22:14.019 { 00:22:14.019 "trid": { 00:22:14.019 "trtype": "TCP", 00:22:14.019 "adrfam": "IPv4", 00:22:14.019 "traddr": "10.0.0.2", 00:22:14.019 "trsvcid": "4421", 00:22:14.019 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:14.019 }, 00:22:14.019 "ctrlr_data": { 00:22:14.019 "cntlid": 3, 00:22:14.019 "vendor_id": "0x8086", 00:22:14.019 "model_number": "SPDK bdev Controller", 00:22:14.019 "serial_number": "00000000000000000000", 00:22:14.019 "firmware_revision": "25.01", 00:22:14.019 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:14.019 "oacs": { 00:22:14.019 "security": 0, 00:22:14.019 "format": 0, 00:22:14.019 "firmware": 0, 00:22:14.019 "ns_manage": 0 00:22:14.019 }, 00:22:14.019 "multi_ctrlr": true, 00:22:14.019 "ana_reporting": false 00:22:14.019 }, 00:22:14.019 "vs": { 00:22:14.019 "nvme_version": "1.3" 00:22:14.019 }, 00:22:14.019 "ns_data": { 00:22:14.019 "id": 1, 00:22:14.019 "can_share": true 00:22:14.019 } 00:22:14.019 } 00:22:14.019 ], 00:22:14.019 "mp_policy": "active_passive" 00:22:14.019 } 00:22:14.019 } 00:22:14.019 ] 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.27C7ul8GQo 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
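The run above exercises the experimental TLS path end to end: a PSK is written to a temp file, registered with the keyring, the subsystem stops allowing any host, a listener is re-added on port 4421 with --secure-channel, and the controller re-attaches with --psk; the second bdev_get_bdevs then shows trsvcid 4421 and cntlid 3. A condensed sketch of that RPC sequence, assuming the stock scripts/rpc.py in place of the test's rpc_cmd wrapper (the key material and all flags are taken from the log):

  # TLS/PSK attach flow as driven by host/async_init.sh (sketch)
  KEY=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY"
  chmod 0600 "$KEY"
  ./scripts/rpc.py keyring_file_add_key key0 "$KEY"
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
      nqn.2016-06.io.spdk:host1 --psk key0
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
      -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0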
00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:14.019 rmmod nvme_tcp 00:22:14.019 rmmod nvme_fabrics 00:22:14.019 rmmod nvme_keyring 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3189323 ']' 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3189323 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3189323 ']' 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3189323 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3189323 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3189323' 00:22:14.019 killing process with pid 3189323 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3189323 00:22:14.019 15:03:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3189323 00:22:14.278 15:03:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:14.278 15:03:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:14.278 15:03:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:14.278 15:03:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:22:14.278 15:03:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:22:14.278 15:03:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:14.278 15:03:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:22:14.278 15:03:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:14.278 15:03:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:14.278 15:03:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
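nvmftestfini above tears the host side back down: the kernel NVMe/TCP modules are unloaded, the target is killed by its saved PID, and the iptables rules tagged SPDK_NVMF are dropped on restore. A rough standalone equivalent of that cleanup, assuming the target PID was captured when nvmf_tgt was launched (the 3189323 value and the rule-filter pattern come straight from the log):

  # Teardown pattern mirrored from nvmf/common.sh's nvmftestfini (sketch)
  sync
  modprobe -v -r nvme-tcp        # also drags out nvme_fabrics / nvme_keyring, as in the rmmod lines above
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                # nvmfpid saved at target start; the script polls ps rather than wait
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test's ACCEPT rules
  ip netns del cvl_0_0_ns_spdk 2>/dev/null || true       # remove the target namespace if still present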
00:22:14.278 15:03:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.278 15:03:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.184 15:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:16.184 00:22:16.184 real 0m9.430s 00:22:16.184 user 0m3.058s 00:22:16.184 sys 0m4.814s 00:22:16.184 15:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.184 15:03:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:16.184 ************************************ 00:22:16.184 END TEST nvmf_async_init 00:22:16.184 ************************************ 00:22:16.184 15:03:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:16.184 15:03:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:16.184 15:03:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.184 15:03:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.444 ************************************ 00:22:16.444 START TEST dma 00:22:16.444 ************************************ 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:16.444 * Looking for test storage... 00:22:16.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:16.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.444 --rc genhtml_branch_coverage=1 00:22:16.444 --rc genhtml_function_coverage=1 00:22:16.444 --rc genhtml_legend=1 00:22:16.444 --rc geninfo_all_blocks=1 00:22:16.444 --rc geninfo_unexecuted_blocks=1 00:22:16.444 00:22:16.444 ' 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:16.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.444 --rc genhtml_branch_coverage=1 00:22:16.444 --rc genhtml_function_coverage=1 00:22:16.444 --rc genhtml_legend=1 00:22:16.444 --rc geninfo_all_blocks=1 00:22:16.444 --rc geninfo_unexecuted_blocks=1 00:22:16.444 00:22:16.444 ' 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:16.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.444 --rc genhtml_branch_coverage=1 00:22:16.444 --rc genhtml_function_coverage=1 00:22:16.444 --rc genhtml_legend=1 00:22:16.444 --rc geninfo_all_blocks=1 00:22:16.444 --rc geninfo_unexecuted_blocks=1 00:22:16.444 00:22:16.444 ' 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:16.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.444 --rc genhtml_branch_coverage=1 00:22:16.444 --rc genhtml_function_coverage=1 00:22:16.444 --rc genhtml_legend=1 00:22:16.444 --rc geninfo_all_blocks=1 00:22:16.444 --rc geninfo_unexecuted_blocks=1 00:22:16.444 00:22:16.444 ' 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
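The "lt 1.15 2" check above is scripts/common.sh deciding whether the installed lcov predates version 2, which controls whether the older --rc lcov_branch_coverage/--rc lcov_function_coverage options are exported. A standalone sketch of the same field-by-field comparison idea, not the scripts/common.sh implementation itself:

  # Return 0 when version $1 sorts before version $2, splitting on . - : (sketch)
  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          ((${a[i]:-0} < ${b[i]:-0})) && return 0
          ((${a[i]:-0} > ${b[i]:-0})) && return 1
      done
      return 1   # versions are equal
  }
  version_lt 1.15 2 && echo "1.15 is older than 2"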
00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.444 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:16.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:16.445 00:22:16.445 real 0m0.210s 00:22:16.445 user 0m0.130s 00:22:16.445 sys 0m0.093s 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:16.445 15:03:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:16.445 ************************************ 00:22:16.445 END TEST dma 00:22:16.445 ************************************ 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.704 ************************************ 00:22:16.704 START TEST nvmf_identify 00:22:16.704 
************************************ 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:16.704 * Looking for test storage... 00:22:16.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:16.704 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:16.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.705 --rc genhtml_branch_coverage=1 00:22:16.705 --rc genhtml_function_coverage=1 00:22:16.705 --rc genhtml_legend=1 00:22:16.705 --rc geninfo_all_blocks=1 00:22:16.705 --rc geninfo_unexecuted_blocks=1 00:22:16.705 00:22:16.705 ' 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:16.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.705 --rc genhtml_branch_coverage=1 00:22:16.705 --rc genhtml_function_coverage=1 00:22:16.705 --rc genhtml_legend=1 00:22:16.705 --rc geninfo_all_blocks=1 00:22:16.705 --rc geninfo_unexecuted_blocks=1 00:22:16.705 00:22:16.705 ' 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:16.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.705 --rc genhtml_branch_coverage=1 00:22:16.705 --rc genhtml_function_coverage=1 00:22:16.705 --rc genhtml_legend=1 00:22:16.705 --rc geninfo_all_blocks=1 00:22:16.705 --rc geninfo_unexecuted_blocks=1 00:22:16.705 00:22:16.705 ' 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:16.705 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.705 --rc genhtml_branch_coverage=1 00:22:16.705 --rc genhtml_function_coverage=1 00:22:16.705 --rc genhtml_legend=1 00:22:16.705 --rc geninfo_all_blocks=1 00:22:16.705 --rc geninfo_unexecuted_blocks=1 00:22:16.705 00:22:16.705 ' 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:16.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:16.705 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:16.964 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.964 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.964 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:16.964 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:16.965 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.965 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:16.965 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:16.965 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:16.965 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.965 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.965 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.965 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:16.965 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:16.965 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:22:16.965 15:03:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.536 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:23.537 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:23.537 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
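Having matched the two e810 functions (0x8086:0x159b bound to the ice driver), nvmf/common.sh resolves the kernel net devices behind each PCI address through sysfs, which is where the cvl_0_0 / cvl_0_1 names in the next lines come from. A hedged sketch of that sysfs lookup for one of the addresses shown in the log:

  # List net interfaces backed by a given PCI function, as prepare_net_devs does (sketch)
  pci=0000:86:00.0                                   # address taken from the log above
  for dev in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$dev" ] || continue
      echo "Found net device under $pci: ${dev##*/}"
  done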
00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:23.537 Found net devices under 0000:86:00.0: cvl_0_0 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:23.537 Found net devices under 0000:86:00.1: cvl_0_1 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:23.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:22:23.537 00:22:23.537 --- 10.0.0.2 ping statistics --- 00:22:23.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.537 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:23.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:22:23.537 00:22:23.537 --- 10.0.0.1 ping statistics --- 00:22:23.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.537 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3193494 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3193494 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3193494 ']' 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.537 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.537 [2024-12-11 15:03:15.707648] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:22:23.537 [2024-12-11 15:03:15.707689] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.538 [2024-12-11 15:03:15.770812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:23.538 [2024-12-11 15:03:15.814442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.538 [2024-12-11 15:03:15.814478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.538 [2024-12-11 15:03:15.814487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.538 [2024-12-11 15:03:15.814495] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.538 [2024-12-11 15:03:15.814501] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:23.538 [2024-12-11 15:03:15.819175] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.538 [2024-12-11 15:03:15.819213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.538 [2024-12-11 15:03:15.819319] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.538 [2024-12-11 15:03:15.819320] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.538 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.538 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:23.538 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:23.538 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.538 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.538 [2024-12-11 15:03:15.920957] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.538 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.538 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:23.538 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:23.538 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.538 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:23.538 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.538 15:03:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.538 Malloc0 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.538 [2024-12-11 15:03:16.031985] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.538 [ 00:22:23.538 { 00:22:23.538 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:23.538 "subtype": "Discovery", 00:22:23.538 "listen_addresses": [ 00:22:23.538 { 00:22:23.538 "trtype": "TCP", 00:22:23.538 "adrfam": "IPv4", 00:22:23.538 "traddr": "10.0.0.2", 00:22:23.538 "trsvcid": "4420" 00:22:23.538 } 00:22:23.538 ], 00:22:23.538 "allow_any_host": true, 00:22:23.538 "hosts": [] 00:22:23.538 }, 00:22:23.538 { 00:22:23.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.538 "subtype": "NVMe", 00:22:23.538 "listen_addresses": [ 00:22:23.538 { 00:22:23.538 "trtype": "TCP", 00:22:23.538 "adrfam": "IPv4", 00:22:23.538 "traddr": "10.0.0.2", 00:22:23.538 "trsvcid": "4420" 00:22:23.538 } 00:22:23.538 ], 00:22:23.538 "allow_any_host": true, 00:22:23.538 "hosts": [], 00:22:23.538 "serial_number": "SPDK00000000000001", 00:22:23.538 "model_number": "SPDK bdev Controller", 00:22:23.538 "max_namespaces": 32, 00:22:23.538 "min_cntlid": 1, 00:22:23.538 "max_cntlid": 65519, 00:22:23.538 "namespaces": [ 00:22:23.538 { 00:22:23.538 "nsid": 1, 00:22:23.538 "bdev_name": "Malloc0", 00:22:23.538 "name": "Malloc0", 00:22:23.538 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:23.538 "eui64": "ABCDEF0123456789", 00:22:23.538 "uuid": "acd0b1f7-857f-459b-a226-b0c710dd57be" 00:22:23.538 } 00:22:23.538 ] 00:22:23.538 } 00:22:23.538 ] 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.538 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:23.538 [2024-12-11 15:03:16.088386] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:22:23.538 [2024-12-11 15:03:16.088419] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193564 ] 00:22:23.538 [2024-12-11 15:03:16.128057] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:23.538 [2024-12-11 15:03:16.128096] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:23.538 [2024-12-11 15:03:16.128101] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:23.538 [2024-12-11 15:03:16.128112] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:23.538 [2024-12-11 15:03:16.128120] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:23.538 [2024-12-11 15:03:16.132469] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:23.538 [2024-12-11 15:03:16.132506] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e7b690 0 00:22:23.538 [2024-12-11 15:03:16.132663] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:23.538 [2024-12-11 15:03:16.132670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:23.538 [2024-12-11 15:03:16.132674] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:23.538 [2024-12-11 15:03:16.132677] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:23.538 [2024-12-11 15:03:16.132701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.538 [2024-12-11 15:03:16.132707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.538 [2024-12-11 15:03:16.132710] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7b690) 00:22:23.538 [2024-12-11 15:03:16.132721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:23.538 [2024-12-11 15:03:16.132733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd100, cid 0, qid 0 00:22:23.538 [2024-12-11 15:03:16.140169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.538 [2024-12-11 15:03:16.140178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.538 [2024-12-11 15:03:16.140181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.538 [2024-12-11 15:03:16.140185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd100) on tqpair=0x1e7b690 00:22:23.538 [2024-12-11 15:03:16.140194] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:23.538 [2024-12-11 15:03:16.140201] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:23.538 [2024-12-11 15:03:16.140206] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:23.538 [2024-12-11 15:03:16.140217] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.538 [2024-12-11 15:03:16.140221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.538 [2024-12-11 15:03:16.140224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7b690) 00:22:23.538 [2024-12-11 15:03:16.140231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-11 15:03:16.140245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd100, cid 0, qid 0 00:22:23.538 [2024-12-11 15:03:16.140411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.538 [2024-12-11 15:03:16.140417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.538 [2024-12-11 15:03:16.140420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.538 [2024-12-11 15:03:16.140424] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd100) on tqpair=0x1e7b690 00:22:23.538 [2024-12-11 15:03:16.140428] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:23.538 [2024-12-11 15:03:16.140434] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:23.538 [2024-12-11 15:03:16.140441] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.538 [2024-12-11 15:03:16.140444] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.538 [2024-12-11 15:03:16.140447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7b690) 00:22:23.538 [2024-12-11 15:03:16.140453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.538 [2024-12-11 15:03:16.140463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd100, cid 0, qid 0 00:22:23.538 [2024-12-11 15:03:16.140526] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.538 [2024-12-11 15:03:16.140532] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.538 [2024-12-11 15:03:16.140535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.538 [2024-12-11 15:03:16.140539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd100) on tqpair=0x1e7b690 00:22:23.539 [2024-12-11 15:03:16.140543] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:23.539 [2024-12-11 15:03:16.140550] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:23.539 [2024-12-11 15:03:16.140556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.140560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.140563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7b690) 00:22:23.539 [2024-12-11 15:03:16.140568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.539 [2024-12-11 15:03:16.140578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd100, cid 0, qid 0 
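The rpc_cmd calls traced earlier in this run (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) are thin wrappers around SPDK's scripts/rpc.py, and the DEBUG lines above show the initiator working through the usual fabrics bring-up on the admin queue: FABRIC CONNECT, then property GETs of VS and CAP before CC is examined. A minimal sketch of the same target-side setup, run by hand against an already started nvmf_tgt, could look like the following; the addresses, NQNs, bdev name and flag values are copied verbatim from the trace above rather than documented individually:
  # assumes ./build/bin/nvmf_tgt is already running and scripts/rpc.py can reach its RPC socket
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM-backed bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems                           # returns the JSON listing shown above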
00:22:23.539 [2024-12-11 15:03:16.140642] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.539 [2024-12-11 15:03:16.140647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.539 [2024-12-11 15:03:16.140650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.140654] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd100) on tqpair=0x1e7b690 00:22:23.539 [2024-12-11 15:03:16.140658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:23.539 [2024-12-11 15:03:16.140666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.140670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.140673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7b690) 00:22:23.539 [2024-12-11 15:03:16.140679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.539 [2024-12-11 15:03:16.140688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd100, cid 0, qid 0 00:22:23.539 [2024-12-11 15:03:16.140751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.539 [2024-12-11 15:03:16.140758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.539 [2024-12-11 15:03:16.140761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.140764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd100) on tqpair=0x1e7b690 00:22:23.539 [2024-12-11 15:03:16.140769] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:23.539 [2024-12-11 15:03:16.140773] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:23.539 [2024-12-11 15:03:16.140780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:23.539 [2024-12-11 15:03:16.140888] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:23.539 [2024-12-11 15:03:16.140893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:23.539 [2024-12-11 15:03:16.140900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.140904] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.140907] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7b690) 00:22:23.539 [2024-12-11 15:03:16.140912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.539 [2024-12-11 15:03:16.140922] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd100, cid 0, qid 0 00:22:23.539 [2024-12-11 15:03:16.140986] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.539 [2024-12-11 15:03:16.140991] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.539 [2024-12-11 15:03:16.140994] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.140998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd100) on tqpair=0x1e7b690 00:22:23.539 [2024-12-11 15:03:16.141002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:23.539 [2024-12-11 15:03:16.141011] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7b690) 00:22:23.539 [2024-12-11 15:03:16.141023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.539 [2024-12-11 15:03:16.141032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd100, cid 0, qid 0 00:22:23.539 [2024-12-11 15:03:16.141097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.539 [2024-12-11 15:03:16.141103] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.539 [2024-12-11 15:03:16.141106] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd100) on tqpair=0x1e7b690 00:22:23.539 [2024-12-11 15:03:16.141113] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:23.539 [2024-12-11 15:03:16.141118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:23.539 [2024-12-11 15:03:16.141124] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:23.539 [2024-12-11 15:03:16.141131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:23.539 [2024-12-11 15:03:16.141140] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141144] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7b690) 00:22:23.539 [2024-12-11 15:03:16.141150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.539 [2024-12-11 15:03:16.141163] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd100, cid 0, qid 0 00:22:23.539 [2024-12-11 15:03:16.141254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.539 [2024-12-11 15:03:16.141260] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.539 [2024-12-11 15:03:16.141263] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141267] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e7b690): datao=0, datal=4096, cccid=0 00:22:23.539 [2024-12-11 15:03:16.141271] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1edd100) on tqpair(0x1e7b690): expected_datao=0, payload_size=4096 00:22:23.539 [2024-12-11 15:03:16.141275] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141282] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141285] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141301] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.539 [2024-12-11 15:03:16.141306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.539 [2024-12-11 15:03:16.141309] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141313] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd100) on tqpair=0x1e7b690 00:22:23.539 [2024-12-11 15:03:16.141320] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:23.539 [2024-12-11 15:03:16.141324] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:23.539 [2024-12-11 15:03:16.141328] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:23.539 [2024-12-11 15:03:16.141333] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:23.539 [2024-12-11 15:03:16.141337] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:23.539 [2024-12-11 15:03:16.141341] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:23.539 [2024-12-11 15:03:16.141352] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:23.539 [2024-12-11 15:03:16.141359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141363] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7b690) 00:22:23.539 [2024-12-11 15:03:16.141372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:23.539 [2024-12-11 15:03:16.141383] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd100, cid 0, qid 0 00:22:23.539 [2024-12-11 15:03:16.141449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.539 [2024-12-11 15:03:16.141454] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.539 [2024-12-11 15:03:16.141458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd100) on tqpair=0x1e7b690 00:22:23.539 [2024-12-11 15:03:16.141467] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e7b690) 00:22:23.539 
[2024-12-11 15:03:16.141483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.539 [2024-12-11 15:03:16.141488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e7b690) 00:22:23.539 [2024-12-11 15:03:16.141499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.539 [2024-12-11 15:03:16.141504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e7b690) 00:22:23.539 [2024-12-11 15:03:16.141516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.539 [2024-12-11 15:03:16.141521] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.539 [2024-12-11 15:03:16.141528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7b690) 00:22:23.539 [2024-12-11 15:03:16.141532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.539 [2024-12-11 15:03:16.141537] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:23.540 [2024-12-11 15:03:16.141547] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:23.540 [2024-12-11 15:03:16.141553] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.141556] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e7b690) 00:22:23.540 [2024-12-11 15:03:16.141562] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.540 [2024-12-11 15:03:16.141573] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd100, cid 0, qid 0 00:22:23.540 [2024-12-11 15:03:16.141578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd280, cid 1, qid 0 00:22:23.540 [2024-12-11 15:03:16.141582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd400, cid 2, qid 0 00:22:23.540 [2024-12-11 15:03:16.141586] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd580, cid 3, qid 0 00:22:23.540 [2024-12-11 15:03:16.141590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd700, cid 4, qid 0 00:22:23.540 [2024-12-11 15:03:16.141684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.540 [2024-12-11 15:03:16.141689] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.540 [2024-12-11 15:03:16.141693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:22:23.540 [2024-12-11 15:03:16.141696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd700) on tqpair=0x1e7b690 00:22:23.540 [2024-12-11 15:03:16.141700] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:23.540 [2024-12-11 15:03:16.141705] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:22:23.540 [2024-12-11 15:03:16.141715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.141718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e7b690) 00:22:23.540 [2024-12-11 15:03:16.141726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.540 [2024-12-11 15:03:16.141735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd700, cid 4, qid 0 00:22:23.540 [2024-12-11 15:03:16.141813] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.540 [2024-12-11 15:03:16.141819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.540 [2024-12-11 15:03:16.141822] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.141825] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e7b690): datao=0, datal=4096, cccid=4 00:22:23.540 [2024-12-11 15:03:16.141829] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1edd700) on tqpair(0x1e7b690): expected_datao=0, payload_size=4096 00:22:23.540 [2024-12-11 15:03:16.141833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.141839] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.141842] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.183277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.540 [2024-12-11 15:03:16.183289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.540 [2024-12-11 15:03:16.183293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.183297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd700) on tqpair=0x1e7b690 00:22:23.540 [2024-12-11 15:03:16.183309] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:23.540 [2024-12-11 15:03:16.183336] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.183340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e7b690) 00:22:23.540 [2024-12-11 15:03:16.183348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.540 [2024-12-11 15:03:16.183354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.183358] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.183361] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e7b690) 00:22:23.540 [2024-12-11 15:03:16.183366] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.540 [2024-12-11 15:03:16.183381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd700, cid 4, qid 0 00:22:23.540 [2024-12-11 15:03:16.183386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd880, cid 5, qid 0 00:22:23.540 [2024-12-11 15:03:16.183487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.540 [2024-12-11 15:03:16.183493] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.540 [2024-12-11 15:03:16.183496] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.183499] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e7b690): datao=0, datal=1024, cccid=4 00:22:23.540 [2024-12-11 15:03:16.183503] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1edd700) on tqpair(0x1e7b690): expected_datao=0, payload_size=1024 00:22:23.540 [2024-12-11 15:03:16.183507] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.183513] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.183517] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.183522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.540 [2024-12-11 15:03:16.183526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.540 [2024-12-11 15:03:16.183529] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.183533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd880) on tqpair=0x1e7b690 00:22:23.540 [2024-12-11 15:03:16.225303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.540 [2024-12-11 15:03:16.225314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.540 [2024-12-11 15:03:16.225318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.225323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd700) on tqpair=0x1e7b690 00:22:23.540 [2024-12-11 15:03:16.225337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.225342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e7b690) 00:22:23.540 [2024-12-11 15:03:16.225348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.540 [2024-12-11 15:03:16.225366] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd700, cid 4, qid 0 00:22:23.540 [2024-12-11 15:03:16.225450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.540 [2024-12-11 15:03:16.225456] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.540 [2024-12-11 15:03:16.225459] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.225462] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e7b690): datao=0, datal=3072, cccid=4 00:22:23.540 [2024-12-11 15:03:16.225466] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1edd700) on tqpair(0x1e7b690): expected_datao=0, payload_size=3072 00:22:23.540 [2024-12-11 15:03:16.225470] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.225476] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.225480] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.225506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.540 [2024-12-11 15:03:16.225512] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.540 [2024-12-11 15:03:16.225515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.225518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd700) on tqpair=0x1e7b690 00:22:23.540 [2024-12-11 15:03:16.225526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.225529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e7b690) 00:22:23.540 [2024-12-11 15:03:16.225535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.540 [2024-12-11 15:03:16.225549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd700, cid 4, qid 0 00:22:23.540 [2024-12-11 15:03:16.225621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.540 [2024-12-11 15:03:16.225627] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.540 [2024-12-11 15:03:16.225630] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.225633] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e7b690): datao=0, datal=8, cccid=4 00:22:23.540 [2024-12-11 15:03:16.225637] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1edd700) on tqpair(0x1e7b690): expected_datao=0, payload_size=8 00:22:23.540 [2024-12-11 15:03:16.225641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.225646] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.225650] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.267307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.540 [2024-12-11 15:03:16.267316] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.540 [2024-12-11 15:03:16.267319] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.540 [2024-12-11 15:03:16.267323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd700) on tqpair=0x1e7b690 00:22:23.540 ===================================================== 00:22:23.540 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:23.540 ===================================================== 00:22:23.540 Controller Capabilities/Features 00:22:23.540 ================================ 00:22:23.540 Vendor ID: 0000 00:22:23.540 Subsystem Vendor ID: 0000 00:22:23.540 Serial Number: .................... 00:22:23.540 Model Number: ........................................ 
00:22:23.540 Firmware Version: 25.01 00:22:23.540 Recommended Arb Burst: 0 00:22:23.540 IEEE OUI Identifier: 00 00 00 00:22:23.540 Multi-path I/O 00:22:23.540 May have multiple subsystem ports: No 00:22:23.540 May have multiple controllers: No 00:22:23.540 Associated with SR-IOV VF: No 00:22:23.540 Max Data Transfer Size: 131072 00:22:23.540 Max Number of Namespaces: 0 00:22:23.540 Max Number of I/O Queues: 1024 00:22:23.540 NVMe Specification Version (VS): 1.3 00:22:23.540 NVMe Specification Version (Identify): 1.3 00:22:23.540 Maximum Queue Entries: 128 00:22:23.540 Contiguous Queues Required: Yes 00:22:23.540 Arbitration Mechanisms Supported 00:22:23.540 Weighted Round Robin: Not Supported 00:22:23.540 Vendor Specific: Not Supported 00:22:23.540 Reset Timeout: 15000 ms 00:22:23.540 Doorbell Stride: 4 bytes 00:22:23.540 NVM Subsystem Reset: Not Supported 00:22:23.540 Command Sets Supported 00:22:23.540 NVM Command Set: Supported 00:22:23.540 Boot Partition: Not Supported 00:22:23.541 Memory Page Size Minimum: 4096 bytes 00:22:23.541 Memory Page Size Maximum: 4096 bytes 00:22:23.541 Persistent Memory Region: Not Supported 00:22:23.541 Optional Asynchronous Events Supported 00:22:23.541 Namespace Attribute Notices: Not Supported 00:22:23.541 Firmware Activation Notices: Not Supported 00:22:23.541 ANA Change Notices: Not Supported 00:22:23.541 PLE Aggregate Log Change Notices: Not Supported 00:22:23.541 LBA Status Info Alert Notices: Not Supported 00:22:23.541 EGE Aggregate Log Change Notices: Not Supported 00:22:23.541 Normal NVM Subsystem Shutdown event: Not Supported 00:22:23.541 Zone Descriptor Change Notices: Not Supported 00:22:23.541 Discovery Log Change Notices: Supported 00:22:23.541 Controller Attributes 00:22:23.541 128-bit Host Identifier: Not Supported 00:22:23.541 Non-Operational Permissive Mode: Not Supported 00:22:23.541 NVM Sets: Not Supported 00:22:23.541 Read Recovery Levels: Not Supported 00:22:23.541 Endurance Groups: Not Supported 00:22:23.541 Predictable Latency Mode: Not Supported 00:22:23.541 Traffic Based Keep ALive: Not Supported 00:22:23.541 Namespace Granularity: Not Supported 00:22:23.541 SQ Associations: Not Supported 00:22:23.541 UUID List: Not Supported 00:22:23.541 Multi-Domain Subsystem: Not Supported 00:22:23.541 Fixed Capacity Management: Not Supported 00:22:23.541 Variable Capacity Management: Not Supported 00:22:23.541 Delete Endurance Group: Not Supported 00:22:23.541 Delete NVM Set: Not Supported 00:22:23.541 Extended LBA Formats Supported: Not Supported 00:22:23.541 Flexible Data Placement Supported: Not Supported 00:22:23.541 00:22:23.541 Controller Memory Buffer Support 00:22:23.541 ================================ 00:22:23.541 Supported: No 00:22:23.541 00:22:23.541 Persistent Memory Region Support 00:22:23.541 ================================ 00:22:23.541 Supported: No 00:22:23.541 00:22:23.541 Admin Command Set Attributes 00:22:23.541 ============================ 00:22:23.541 Security Send/Receive: Not Supported 00:22:23.541 Format NVM: Not Supported 00:22:23.541 Firmware Activate/Download: Not Supported 00:22:23.541 Namespace Management: Not Supported 00:22:23.541 Device Self-Test: Not Supported 00:22:23.541 Directives: Not Supported 00:22:23.541 NVMe-MI: Not Supported 00:22:23.541 Virtualization Management: Not Supported 00:22:23.541 Doorbell Buffer Config: Not Supported 00:22:23.541 Get LBA Status Capability: Not Supported 00:22:23.541 Command & Feature Lockdown Capability: Not Supported 00:22:23.541 Abort Command Limit: 1 00:22:23.541 Async 
Event Request Limit: 4 00:22:23.541 Number of Firmware Slots: N/A 00:22:23.541 Firmware Slot 1 Read-Only: N/A 00:22:23.541 Firmware Activation Without Reset: N/A 00:22:23.541 Multiple Update Detection Support: N/A 00:22:23.541 Firmware Update Granularity: No Information Provided 00:22:23.541 Per-Namespace SMART Log: No 00:22:23.541 Asymmetric Namespace Access Log Page: Not Supported 00:22:23.541 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:23.541 Command Effects Log Page: Not Supported 00:22:23.541 Get Log Page Extended Data: Supported 00:22:23.541 Telemetry Log Pages: Not Supported 00:22:23.541 Persistent Event Log Pages: Not Supported 00:22:23.541 Supported Log Pages Log Page: May Support 00:22:23.541 Commands Supported & Effects Log Page: Not Supported 00:22:23.541 Feature Identifiers & Effects Log Page:May Support 00:22:23.541 NVMe-MI Commands & Effects Log Page: May Support 00:22:23.541 Data Area 4 for Telemetry Log: Not Supported 00:22:23.541 Error Log Page Entries Supported: 128 00:22:23.541 Keep Alive: Not Supported 00:22:23.541 00:22:23.541 NVM Command Set Attributes 00:22:23.541 ========================== 00:22:23.541 Submission Queue Entry Size 00:22:23.541 Max: 1 00:22:23.541 Min: 1 00:22:23.541 Completion Queue Entry Size 00:22:23.541 Max: 1 00:22:23.541 Min: 1 00:22:23.541 Number of Namespaces: 0 00:22:23.541 Compare Command: Not Supported 00:22:23.541 Write Uncorrectable Command: Not Supported 00:22:23.541 Dataset Management Command: Not Supported 00:22:23.541 Write Zeroes Command: Not Supported 00:22:23.541 Set Features Save Field: Not Supported 00:22:23.541 Reservations: Not Supported 00:22:23.541 Timestamp: Not Supported 00:22:23.541 Copy: Not Supported 00:22:23.541 Volatile Write Cache: Not Present 00:22:23.541 Atomic Write Unit (Normal): 1 00:22:23.541 Atomic Write Unit (PFail): 1 00:22:23.541 Atomic Compare & Write Unit: 1 00:22:23.541 Fused Compare & Write: Supported 00:22:23.541 Scatter-Gather List 00:22:23.541 SGL Command Set: Supported 00:22:23.541 SGL Keyed: Supported 00:22:23.541 SGL Bit Bucket Descriptor: Not Supported 00:22:23.541 SGL Metadata Pointer: Not Supported 00:22:23.541 Oversized SGL: Not Supported 00:22:23.541 SGL Metadata Address: Not Supported 00:22:23.541 SGL Offset: Supported 00:22:23.541 Transport SGL Data Block: Not Supported 00:22:23.541 Replay Protected Memory Block: Not Supported 00:22:23.541 00:22:23.541 Firmware Slot Information 00:22:23.541 ========================= 00:22:23.541 Active slot: 0 00:22:23.541 00:22:23.541 00:22:23.541 Error Log 00:22:23.541 ========= 00:22:23.541 00:22:23.541 Active Namespaces 00:22:23.541 ================= 00:22:23.541 Discovery Log Page 00:22:23.541 ================== 00:22:23.541 Generation Counter: 2 00:22:23.541 Number of Records: 2 00:22:23.541 Record Format: 0 00:22:23.541 00:22:23.541 Discovery Log Entry 0 00:22:23.541 ---------------------- 00:22:23.541 Transport Type: 3 (TCP) 00:22:23.541 Address Family: 1 (IPv4) 00:22:23.541 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:23.541 Entry Flags: 00:22:23.541 Duplicate Returned Information: 1 00:22:23.541 Explicit Persistent Connection Support for Discovery: 1 00:22:23.541 Transport Requirements: 00:22:23.541 Secure Channel: Not Required 00:22:23.541 Port ID: 0 (0x0000) 00:22:23.541 Controller ID: 65535 (0xffff) 00:22:23.541 Admin Max SQ Size: 128 00:22:23.541 Transport Service Identifier: 4420 00:22:23.541 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:23.541 Transport Address: 10.0.0.2 00:22:23.541 
Discovery Log Entry 1 00:22:23.541 ---------------------- 00:22:23.541 Transport Type: 3 (TCP) 00:22:23.541 Address Family: 1 (IPv4) 00:22:23.541 Subsystem Type: 2 (NVM Subsystem) 00:22:23.541 Entry Flags: 00:22:23.541 Duplicate Returned Information: 0 00:22:23.541 Explicit Persistent Connection Support for Discovery: 0 00:22:23.541 Transport Requirements: 00:22:23.541 Secure Channel: Not Required 00:22:23.541 Port ID: 0 (0x0000) 00:22:23.541 Controller ID: 65535 (0xffff) 00:22:23.541 Admin Max SQ Size: 128 00:22:23.541 Transport Service Identifier: 4420 00:22:23.541 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:23.541 Transport Address: 10.0.0.2 [2024-12-11 15:03:16.267405] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:23.541 [2024-12-11 15:03:16.267418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd100) on tqpair=0x1e7b690 00:22:23.541 [2024-12-11 15:03:16.267424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.541 [2024-12-11 15:03:16.267428] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd280) on tqpair=0x1e7b690 00:22:23.541 [2024-12-11 15:03:16.267433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.541 [2024-12-11 15:03:16.267437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd400) on tqpair=0x1e7b690 00:22:23.542 [2024-12-11 15:03:16.267441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.542 [2024-12-11 15:03:16.267446] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd580) on tqpair=0x1e7b690 00:22:23.542 [2024-12-11 15:03:16.267449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.542 [2024-12-11 15:03:16.267457] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.267461] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.267464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7b690) 00:22:23.542 [2024-12-11 15:03:16.267471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.542 [2024-12-11 15:03:16.267484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd580, cid 3, qid 0 00:22:23.542 [2024-12-11 15:03:16.267547] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.542 [2024-12-11 15:03:16.267553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.542 [2024-12-11 15:03:16.267556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.267559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd580) on tqpair=0x1e7b690 00:22:23.542 [2024-12-11 15:03:16.267566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.267569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.267572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7b690) 00:22:23.542 [2024-12-11 
15:03:16.267578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.542 [2024-12-11 15:03:16.267590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd580, cid 3, qid 0 00:22:23.542 [2024-12-11 15:03:16.267659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.542 [2024-12-11 15:03:16.267665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.542 [2024-12-11 15:03:16.267668] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.267671] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd580) on tqpair=0x1e7b690 00:22:23.542 [2024-12-11 15:03:16.267675] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:23.542 [2024-12-11 15:03:16.267679] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:23.542 [2024-12-11 15:03:16.267687] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.267691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.267694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7b690) 00:22:23.542 [2024-12-11 15:03:16.267700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.542 [2024-12-11 15:03:16.267709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd580, cid 3, qid 0 00:22:23.542 [2024-12-11 15:03:16.267777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.542 [2024-12-11 15:03:16.267782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.542 [2024-12-11 15:03:16.267785] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.267788] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd580) on tqpair=0x1e7b690 00:22:23.542 [2024-12-11 15:03:16.267797] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.267801] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.267804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7b690) 00:22:23.542 [2024-12-11 15:03:16.267809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.542 [2024-12-11 15:03:16.267819] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd580, cid 3, qid 0 00:22:23.542 [2024-12-11 15:03:16.267894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.542 [2024-12-11 15:03:16.267900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.542 [2024-12-11 15:03:16.267903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.267906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd580) on tqpair=0x1e7b690 00:22:23.542 [2024-12-11 15:03:16.267914] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.267918] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.267921] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7b690) 00:22:23.542 [2024-12-11 15:03:16.267926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.542 [2024-12-11 15:03:16.267936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd580, cid 3, qid 0 00:22:23.542 [2024-12-11 15:03:16.268000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.542 [2024-12-11 15:03:16.268006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.542 [2024-12-11 15:03:16.268009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.268012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd580) on tqpair=0x1e7b690 00:22:23.542 [2024-12-11 15:03:16.268021] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.268024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.268027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7b690) 00:22:23.542 [2024-12-11 15:03:16.268033] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.542 [2024-12-11 15:03:16.268043] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd580, cid 3, qid 0 00:22:23.542 [2024-12-11 15:03:16.268109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.542 [2024-12-11 15:03:16.268115] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.542 [2024-12-11 15:03:16.268118] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.268121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd580) on tqpair=0x1e7b690 00:22:23.542 [2024-12-11 15:03:16.268129] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.268133] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.268136] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7b690) 00:22:23.542 [2024-12-11 15:03:16.268142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.542 [2024-12-11 15:03:16.268151] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd580, cid 3, qid 0 00:22:23.542 [2024-12-11 15:03:16.272166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.542 [2024-12-11 15:03:16.272175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.542 [2024-12-11 15:03:16.272178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.272181] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd580) on tqpair=0x1e7b690 00:22:23.542 [2024-12-11 15:03:16.272190] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.272194] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.272197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e7b690) 00:22:23.542 [2024-12-11 15:03:16.272203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.542 [2024-12-11 15:03:16.272214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1edd580, cid 3, qid 0 00:22:23.542 [2024-12-11 15:03:16.272365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.542 [2024-12-11 15:03:16.272370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.542 [2024-12-11 15:03:16.272374] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.272377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1edd580) on tqpair=0x1e7b690 00:22:23.542 [2024-12-11 15:03:16.272383] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:22:23.542 00:22:23.542 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:23.542 [2024-12-11 15:03:16.310523] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:22:23.542 [2024-12-11 15:03:16.310555] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3193566 ] 00:22:23.542 [2024-12-11 15:03:16.350801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:23.542 [2024-12-11 15:03:16.350838] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:23.542 [2024-12-11 15:03:16.350843] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:23.542 [2024-12-11 15:03:16.350854] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:23.542 [2024-12-11 15:03:16.350861] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:23.542 [2024-12-11 15:03:16.354338] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:23.542 [2024-12-11 15:03:16.354367] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc91690 0 00:22:23.542 [2024-12-11 15:03:16.362171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:23.542 [2024-12-11 15:03:16.362184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:23.542 [2024-12-11 15:03:16.362187] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:23.542 [2024-12-11 15:03:16.362191] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:23.542 [2024-12-11 15:03:16.362216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.362221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.362224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc91690) 00:22:23.542 [2024-12-11 15:03:16.362234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:23.542 [2024-12-11 15:03:16.362253] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3100, cid 0, qid 0 00:22:23.542 [2024-12-11 15:03:16.370167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.542 [2024-12-11 15:03:16.370175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.542 [2024-12-11 15:03:16.370179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.542 [2024-12-11 15:03:16.370183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3100) on tqpair=0xc91690 00:22:23.542 [2024-12-11 15:03:16.370191] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:23.542 [2024-12-11 15:03:16.370197] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:23.542 [2024-12-11 15:03:16.370202] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:23.543 [2024-12-11 15:03:16.370211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.370215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.370218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc91690) 00:22:23.543 [2024-12-11 15:03:16.370225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.543 [2024-12-11 15:03:16.370238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3100, cid 0, qid 0 00:22:23.543 [2024-12-11 15:03:16.370408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.543 [2024-12-11 15:03:16.370414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.543 [2024-12-11 15:03:16.370417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.370420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3100) on tqpair=0xc91690 00:22:23.543 [2024-12-11 15:03:16.370425] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:23.543 [2024-12-11 15:03:16.370431] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:23.543 [2024-12-11 15:03:16.370438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.370442] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.370445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc91690) 00:22:23.543 [2024-12-11 15:03:16.370451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.543 [2024-12-11 15:03:16.370461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3100, cid 0, qid 0 00:22:23.543 [2024-12-11 15:03:16.370555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.543 [2024-12-11 15:03:16.370561] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.543 [2024-12-11 15:03:16.370564] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.370568] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3100) on tqpair=0xc91690 
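For reference, the host side of this test is just two runs of the spdk_nvme_identify example app: the first targeted the discovery subsystem (its report appears above) and the second, whose trace starts here, targets nqn.2016-06.io.spdk:cnode1 directly. Reproduced by hand, the invocations are roughly the following sketch; the binary path and transport string are copied from the trace, while the nvme-cli line is only an assumed cross-check with the kernel initiator tooling and is not part of this test:
  BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify
  TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  "$BIN" -r "$TRID subnqn:nqn.2014-08.org.nvmexpress.discovery" -L all   # discovery controller report
  "$BIN" -r "$TRID subnqn:nqn.2016-06.io.spdk:cnode1" -L all             # identify the NVM subsystem itself
  # nvme discover -t tcp -a 10.0.0.2 -s 4420    # optional cross-check, assumes nvme-cli is installed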
00:22:23.543 [2024-12-11 15:03:16.370572] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:22:23.543 [2024-12-11 15:03:16.370579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:23.543 [2024-12-11 15:03:16.370585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.370589] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.370592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc91690) 00:22:23.543 [2024-12-11 15:03:16.370598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.543 [2024-12-11 15:03:16.370608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3100, cid 0, qid 0 00:22:23.543 [2024-12-11 15:03:16.370670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.543 [2024-12-11 15:03:16.370676] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.543 [2024-12-11 15:03:16.370679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.370683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3100) on tqpair=0xc91690 00:22:23.543 [2024-12-11 15:03:16.370687] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:23.543 [2024-12-11 15:03:16.370696] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.370699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.370703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc91690) 00:22:23.543 [2024-12-11 15:03:16.370709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.543 [2024-12-11 15:03:16.370718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3100, cid 0, qid 0 00:22:23.543 [2024-12-11 15:03:16.370825] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.543 [2024-12-11 15:03:16.370830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.543 [2024-12-11 15:03:16.370834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.370837] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3100) on tqpair=0xc91690 00:22:23.543 [2024-12-11 15:03:16.370841] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:23.543 [2024-12-11 15:03:16.370845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:23.543 [2024-12-11 15:03:16.370853] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:23.543 [2024-12-11 15:03:16.370961] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:23.543 [2024-12-11 15:03:16.370965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:23.543 [2024-12-11 15:03:16.370972] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.370975] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.370979] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc91690) 00:22:23.543 [2024-12-11 15:03:16.370984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.543 [2024-12-11 15:03:16.370994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3100, cid 0, qid 0 00:22:23.543 [2024-12-11 15:03:16.371165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.543 [2024-12-11 15:03:16.371172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.543 [2024-12-11 15:03:16.371175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.371178] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3100) on tqpair=0xc91690 00:22:23.543 [2024-12-11 15:03:16.371182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:23.543 [2024-12-11 15:03:16.371191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.371194] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.371198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc91690) 00:22:23.543 [2024-12-11 15:03:16.371203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.543 [2024-12-11 15:03:16.371216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3100, cid 0, qid 0 00:22:23.543 [2024-12-11 15:03:16.371312] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.543 [2024-12-11 15:03:16.371318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.543 [2024-12-11 15:03:16.371321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.371324] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3100) on tqpair=0xc91690 00:22:23.543 [2024-12-11 15:03:16.371328] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:23.543 [2024-12-11 15:03:16.371332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:23.543 [2024-12-11 15:03:16.371339] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:23.543 [2024-12-11 15:03:16.371349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:23.543 [2024-12-11 15:03:16.371357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.371360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc91690) 00:22:23.543 [2024-12-11 15:03:16.371366] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.543 [2024-12-11 15:03:16.371376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3100, cid 0, qid 0 00:22:23.543 [2024-12-11 15:03:16.371470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.543 [2024-12-11 15:03:16.371476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.543 [2024-12-11 15:03:16.371479] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.371483] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc91690): datao=0, datal=4096, cccid=0 00:22:23.543 [2024-12-11 15:03:16.371486] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf3100) on tqpair(0xc91690): expected_datao=0, payload_size=4096 00:22:23.543 [2024-12-11 15:03:16.371490] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.371515] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.371520] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.371563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.543 [2024-12-11 15:03:16.371569] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.543 [2024-12-11 15:03:16.371572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.371575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3100) on tqpair=0xc91690 00:22:23.543 [2024-12-11 15:03:16.371582] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:23.543 [2024-12-11 15:03:16.371586] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:23.543 [2024-12-11 15:03:16.371590] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:23.543 [2024-12-11 15:03:16.371594] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:23.543 [2024-12-11 15:03:16.371598] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:23.543 [2024-12-11 15:03:16.371603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:23.543 [2024-12-11 15:03:16.371613] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:23.543 [2024-12-11 15:03:16.371623] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.371627] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.371630] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc91690) 00:22:23.543 [2024-12-11 15:03:16.371636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:23.543 [2024-12-11 15:03:16.371647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3100, cid 0, qid 0 00:22:23.543 [2024-12-11 15:03:16.371710] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.543 [2024-12-11 15:03:16.371716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.543 [2024-12-11 15:03:16.371719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.371722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3100) on tqpair=0xc91690 00:22:23.543 [2024-12-11 15:03:16.371728] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.543 [2024-12-11 15:03:16.371731] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.371735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc91690) 00:22:23.544 [2024-12-11 15:03:16.371740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.544 [2024-12-11 15:03:16.371745] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.371748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.371751] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc91690) 00:22:23.544 [2024-12-11 15:03:16.371756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.544 [2024-12-11 15:03:16.371762] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.371765] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.371768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc91690) 00:22:23.544 [2024-12-11 15:03:16.371773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.544 [2024-12-11 15:03:16.371778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.371781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.371784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc91690) 00:22:23.544 [2024-12-11 15:03:16.371790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.544 [2024-12-11 15:03:16.371794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:23.544 [2024-12-11 15:03:16.371804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:23.544 [2024-12-11 15:03:16.371810] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.371813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc91690) 00:22:23.544 [2024-12-11 15:03:16.371819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.544 [2024-12-11 15:03:16.371830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3100, cid 0, qid 0 00:22:23.544 [2024-12-11 15:03:16.371835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0xcf3280, cid 1, qid 0 00:22:23.544 [2024-12-11 15:03:16.371839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3400, cid 2, qid 0 00:22:23.544 [2024-12-11 15:03:16.371843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3580, cid 3, qid 0 00:22:23.544 [2024-12-11 15:03:16.371849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3700, cid 4, qid 0 00:22:23.544 [2024-12-11 15:03:16.371964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.544 [2024-12-11 15:03:16.371970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.544 [2024-12-11 15:03:16.371973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.371976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3700) on tqpair=0xc91690 00:22:23.544 [2024-12-11 15:03:16.371980] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:23.544 [2024-12-11 15:03:16.371984] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:23.544 [2024-12-11 15:03:16.371992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:23.544 [2024-12-11 15:03:16.371999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:23.544 [2024-12-11 15:03:16.372004] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.372008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.372011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc91690) 00:22:23.544 [2024-12-11 15:03:16.372017] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:23.544 [2024-12-11 15:03:16.372026] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3700, cid 4, qid 0 00:22:23.544 [2024-12-11 15:03:16.372115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.544 [2024-12-11 15:03:16.372121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.544 [2024-12-11 15:03:16.372124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.372127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3700) on tqpair=0xc91690 00:22:23.544 [2024-12-11 15:03:16.372183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:23.544 [2024-12-11 15:03:16.372193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:23.544 [2024-12-11 15:03:16.372199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.372203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc91690) 00:22:23.544 [2024-12-11 15:03:16.372209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:23.544 [2024-12-11 15:03:16.372218] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3700, cid 4, qid 0 00:22:23.544 [2024-12-11 15:03:16.372292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.544 [2024-12-11 15:03:16.372298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.544 [2024-12-11 15:03:16.372301] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.372304] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc91690): datao=0, datal=4096, cccid=4 00:22:23.544 [2024-12-11 15:03:16.372308] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf3700) on tqpair(0xc91690): expected_datao=0, payload_size=4096 00:22:23.544 [2024-12-11 15:03:16.372312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.372326] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.372329] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.413321] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.544 [2024-12-11 15:03:16.413336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.544 [2024-12-11 15:03:16.413342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.413346] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3700) on tqpair=0xc91690 00:22:23.544 [2024-12-11 15:03:16.413362] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:23.544 [2024-12-11 15:03:16.413370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:23.544 [2024-12-11 15:03:16.413380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:23.544 [2024-12-11 15:03:16.413387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.413391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc91690) 00:22:23.544 [2024-12-11 15:03:16.413398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.544 [2024-12-11 15:03:16.413411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3700, cid 4, qid 0 00:22:23.544 [2024-12-11 15:03:16.413498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.544 [2024-12-11 15:03:16.413504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.544 [2024-12-11 15:03:16.413507] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.413510] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc91690): datao=0, datal=4096, cccid=4 00:22:23.544 [2024-12-11 15:03:16.413514] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf3700) on tqpair(0xc91690): expected_datao=0, payload_size=4096 00:22:23.544 [2024-12-11 15:03:16.413518] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.413531] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.544 [2024-12-11 
15:03:16.413535] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.456164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.544 [2024-12-11 15:03:16.456175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.544 [2024-12-11 15:03:16.456178] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.456182] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3700) on tqpair=0xc91690 00:22:23.544 [2024-12-11 15:03:16.456195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:23.544 [2024-12-11 15:03:16.456205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:23.544 [2024-12-11 15:03:16.456212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.456216] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc91690) 00:22:23.544 [2024-12-11 15:03:16.456222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.544 [2024-12-11 15:03:16.456234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3700, cid 4, qid 0 00:22:23.544 [2024-12-11 15:03:16.456345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.544 [2024-12-11 15:03:16.456350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.544 [2024-12-11 15:03:16.456354] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.456357] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc91690): datao=0, datal=4096, cccid=4 00:22:23.544 [2024-12-11 15:03:16.456360] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf3700) on tqpair(0xc91690): expected_datao=0, payload_size=4096 00:22:23.544 [2024-12-11 15:03:16.456365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.456378] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.456382] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.497342] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.544 [2024-12-11 15:03:16.497352] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.544 [2024-12-11 15:03:16.497356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.544 [2024-12-11 15:03:16.497359] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3700) on tqpair=0xc91690 00:22:23.544 [2024-12-11 15:03:16.497367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:23.544 [2024-12-11 15:03:16.497375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:23.544 [2024-12-11 15:03:16.497384] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:23.544 [2024-12-11 15:03:16.497391] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:23.544 [2024-12-11 15:03:16.497396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:23.545 [2024-12-11 15:03:16.497400] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:23.545 [2024-12-11 15:03:16.497405] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:23.545 [2024-12-11 15:03:16.497410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:23.545 [2024-12-11 15:03:16.497414] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:23.545 [2024-12-11 15:03:16.497426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.497430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc91690) 00:22:23.545 [2024-12-11 15:03:16.497437] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.545 [2024-12-11 15:03:16.497443] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.497446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.497449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc91690) 00:22:23.545 [2024-12-11 15:03:16.497456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:23.545 [2024-12-11 15:03:16.497469] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3700, cid 4, qid 0 00:22:23.545 [2024-12-11 15:03:16.497474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3880, cid 5, qid 0 00:22:23.545 [2024-12-11 15:03:16.497593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.545 [2024-12-11 15:03:16.497600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.545 [2024-12-11 15:03:16.497603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.497606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3700) on tqpair=0xc91690 00:22:23.545 [2024-12-11 15:03:16.497612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.545 [2024-12-11 15:03:16.497618] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.545 [2024-12-11 15:03:16.497621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.497624] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3880) on tqpair=0xc91690 00:22:23.545 [2024-12-11 15:03:16.497632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.497638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc91690) 00:22:23.545 [2024-12-11 15:03:16.497644] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.545 [2024-12-11 15:03:16.497655] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3880, cid 5, qid 0 00:22:23.545 [2024-12-11 15:03:16.497720] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.545 [2024-12-11 15:03:16.497726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.545 [2024-12-11 15:03:16.497729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.497733] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3880) on tqpair=0xc91690 00:22:23.545 [2024-12-11 15:03:16.497740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.497744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc91690) 00:22:23.545 [2024-12-11 15:03:16.497749] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.545 [2024-12-11 15:03:16.497759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3880, cid 5, qid 0 00:22:23.545 [2024-12-11 15:03:16.497842] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.545 [2024-12-11 15:03:16.497848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.545 [2024-12-11 15:03:16.497851] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.497855] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3880) on tqpair=0xc91690 00:22:23.545 [2024-12-11 15:03:16.497862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.497866] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc91690) 00:22:23.545 [2024-12-11 15:03:16.497872] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.545 [2024-12-11 15:03:16.497881] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3880, cid 5, qid 0 00:22:23.545 [2024-12-11 15:03:16.497993] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.545 [2024-12-11 15:03:16.497999] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.545 [2024-12-11 15:03:16.498002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498005] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3880) on tqpair=0xc91690 00:22:23.545 [2024-12-11 15:03:16.498018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc91690) 00:22:23.545 [2024-12-11 15:03:16.498029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.545 [2024-12-11 15:03:16.498035] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498038] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc91690) 00:22:23.545 [2024-12-11 15:03:16.498043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.545 [2024-12-11 15:03:16.498050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498053] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xc91690) 00:22:23.545 [2024-12-11 15:03:16.498059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.545 [2024-12-11 15:03:16.498065] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498069] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc91690) 00:22:23.545 [2024-12-11 15:03:16.498076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.545 [2024-12-11 15:03:16.498086] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3880, cid 5, qid 0 00:22:23.545 [2024-12-11 15:03:16.498091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3700, cid 4, qid 0 00:22:23.545 [2024-12-11 15:03:16.498095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3a00, cid 6, qid 0 00:22:23.545 [2024-12-11 15:03:16.498099] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3b80, cid 7, qid 0 00:22:23.545 [2024-12-11 15:03:16.498236] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.545 [2024-12-11 15:03:16.498243] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.545 [2024-12-11 15:03:16.498246] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498250] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc91690): datao=0, datal=8192, cccid=5 00:22:23.545 [2024-12-11 15:03:16.498254] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf3880) on tqpair(0xc91690): expected_datao=0, payload_size=8192 00:22:23.545 [2024-12-11 15:03:16.498258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498310] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498314] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.545 [2024-12-11 15:03:16.498324] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.545 [2024-12-11 15:03:16.498327] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498331] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc91690): datao=0, datal=512, cccid=4 00:22:23.545 [2024-12-11 15:03:16.498335] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf3700) on tqpair(0xc91690): expected_datao=0, payload_size=512 00:22:23.545 [2024-12-11 15:03:16.498339] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498344] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498348] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.545 [2024-12-11 
15:03:16.498358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.545 [2024-12-11 15:03:16.498361] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498364] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc91690): datao=0, datal=512, cccid=6 00:22:23.545 [2024-12-11 15:03:16.498367] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf3a00) on tqpair(0xc91690): expected_datao=0, payload_size=512 00:22:23.545 [2024-12-11 15:03:16.498372] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498377] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498380] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:23.545 [2024-12-11 15:03:16.498390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:23.545 [2024-12-11 15:03:16.498393] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498397] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc91690): datao=0, datal=4096, cccid=7 00:22:23.545 [2024-12-11 15:03:16.498401] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf3b80) on tqpair(0xc91690): expected_datao=0, payload_size=4096 00:22:23.545 [2024-12-11 15:03:16.498404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498410] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:23.545 [2024-12-11 15:03:16.498418] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:23.546 [2024-12-11 15:03:16.498425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.546 [2024-12-11 15:03:16.498430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.546 [2024-12-11 15:03:16.498433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.546 [2024-12-11 15:03:16.498437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3880) on tqpair=0xc91690 00:22:23.546 [2024-12-11 15:03:16.498447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.546 [2024-12-11 15:03:16.498452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.546 [2024-12-11 15:03:16.498456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.546 [2024-12-11 15:03:16.498459] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3700) on tqpair=0xc91690 00:22:23.546 [2024-12-11 15:03:16.498467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.546 [2024-12-11 15:03:16.498473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.546 [2024-12-11 15:03:16.498476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.546 [2024-12-11 15:03:16.498480] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3a00) on tqpair=0xc91690 00:22:23.546 [2024-12-11 15:03:16.498486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.546 [2024-12-11 15:03:16.498491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.546 [2024-12-11 15:03:16.498494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.546 [2024-12-11 15:03:16.498498] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3b80) on tqpair=0xc91690 00:22:23.546 ===================================================== 00:22:23.546 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:23.546 ===================================================== 00:22:23.546 Controller Capabilities/Features 00:22:23.546 ================================ 00:22:23.546 Vendor ID: 8086 00:22:23.546 Subsystem Vendor ID: 8086 00:22:23.546 Serial Number: SPDK00000000000001 00:22:23.546 Model Number: SPDK bdev Controller 00:22:23.546 Firmware Version: 25.01 00:22:23.546 Recommended Arb Burst: 6 00:22:23.546 IEEE OUI Identifier: e4 d2 5c 00:22:23.546 Multi-path I/O 00:22:23.546 May have multiple subsystem ports: Yes 00:22:23.546 May have multiple controllers: Yes 00:22:23.546 Associated with SR-IOV VF: No 00:22:23.546 Max Data Transfer Size: 131072 00:22:23.546 Max Number of Namespaces: 32 00:22:23.546 Max Number of I/O Queues: 127 00:22:23.546 NVMe Specification Version (VS): 1.3 00:22:23.546 NVMe Specification Version (Identify): 1.3 00:22:23.546 Maximum Queue Entries: 128 00:22:23.546 Contiguous Queues Required: Yes 00:22:23.546 Arbitration Mechanisms Supported 00:22:23.546 Weighted Round Robin: Not Supported 00:22:23.546 Vendor Specific: Not Supported 00:22:23.546 Reset Timeout: 15000 ms 00:22:23.546 Doorbell Stride: 4 bytes 00:22:23.546 NVM Subsystem Reset: Not Supported 00:22:23.546 Command Sets Supported 00:22:23.546 NVM Command Set: Supported 00:22:23.546 Boot Partition: Not Supported 00:22:23.546 Memory Page Size Minimum: 4096 bytes 00:22:23.546 Memory Page Size Maximum: 4096 bytes 00:22:23.546 Persistent Memory Region: Not Supported 00:22:23.546 Optional Asynchronous Events Supported 00:22:23.546 Namespace Attribute Notices: Supported 00:22:23.546 Firmware Activation Notices: Not Supported 00:22:23.546 ANA Change Notices: Not Supported 00:22:23.546 PLE Aggregate Log Change Notices: Not Supported 00:22:23.546 LBA Status Info Alert Notices: Not Supported 00:22:23.546 EGE Aggregate Log Change Notices: Not Supported 00:22:23.546 Normal NVM Subsystem Shutdown event: Not Supported 00:22:23.546 Zone Descriptor Change Notices: Not Supported 00:22:23.546 Discovery Log Change Notices: Not Supported 00:22:23.546 Controller Attributes 00:22:23.546 128-bit Host Identifier: Supported 00:22:23.546 Non-Operational Permissive Mode: Not Supported 00:22:23.546 NVM Sets: Not Supported 00:22:23.546 Read Recovery Levels: Not Supported 00:22:23.546 Endurance Groups: Not Supported 00:22:23.546 Predictable Latency Mode: Not Supported 00:22:23.546 Traffic Based Keep ALive: Not Supported 00:22:23.546 Namespace Granularity: Not Supported 00:22:23.546 SQ Associations: Not Supported 00:22:23.546 UUID List: Not Supported 00:22:23.546 Multi-Domain Subsystem: Not Supported 00:22:23.546 Fixed Capacity Management: Not Supported 00:22:23.546 Variable Capacity Management: Not Supported 00:22:23.546 Delete Endurance Group: Not Supported 00:22:23.546 Delete NVM Set: Not Supported 00:22:23.546 Extended LBA Formats Supported: Not Supported 00:22:23.546 Flexible Data Placement Supported: Not Supported 00:22:23.546 00:22:23.546 Controller Memory Buffer Support 00:22:23.546 ================================ 00:22:23.546 Supported: No 00:22:23.546 00:22:23.546 Persistent Memory Region Support 00:22:23.546 ================================ 00:22:23.546 Supported: No 00:22:23.546 00:22:23.546 Admin Command Set Attributes 00:22:23.546 ============================ 00:22:23.546 Security 
Send/Receive: Not Supported 00:22:23.546 Format NVM: Not Supported 00:22:23.546 Firmware Activate/Download: Not Supported 00:22:23.546 Namespace Management: Not Supported 00:22:23.546 Device Self-Test: Not Supported 00:22:23.546 Directives: Not Supported 00:22:23.546 NVMe-MI: Not Supported 00:22:23.546 Virtualization Management: Not Supported 00:22:23.546 Doorbell Buffer Config: Not Supported 00:22:23.546 Get LBA Status Capability: Not Supported 00:22:23.546 Command & Feature Lockdown Capability: Not Supported 00:22:23.546 Abort Command Limit: 4 00:22:23.546 Async Event Request Limit: 4 00:22:23.546 Number of Firmware Slots: N/A 00:22:23.546 Firmware Slot 1 Read-Only: N/A 00:22:23.546 Firmware Activation Without Reset: N/A 00:22:23.546 Multiple Update Detection Support: N/A 00:22:23.546 Firmware Update Granularity: No Information Provided 00:22:23.546 Per-Namespace SMART Log: No 00:22:23.546 Asymmetric Namespace Access Log Page: Not Supported 00:22:23.546 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:23.546 Command Effects Log Page: Supported 00:22:23.546 Get Log Page Extended Data: Supported 00:22:23.546 Telemetry Log Pages: Not Supported 00:22:23.546 Persistent Event Log Pages: Not Supported 00:22:23.546 Supported Log Pages Log Page: May Support 00:22:23.546 Commands Supported & Effects Log Page: Not Supported 00:22:23.546 Feature Identifiers & Effects Log Page:May Support 00:22:23.546 NVMe-MI Commands & Effects Log Page: May Support 00:22:23.546 Data Area 4 for Telemetry Log: Not Supported 00:22:23.546 Error Log Page Entries Supported: 128 00:22:23.546 Keep Alive: Supported 00:22:23.546 Keep Alive Granularity: 10000 ms 00:22:23.546 00:22:23.546 NVM Command Set Attributes 00:22:23.546 ========================== 00:22:23.546 Submission Queue Entry Size 00:22:23.546 Max: 64 00:22:23.546 Min: 64 00:22:23.546 Completion Queue Entry Size 00:22:23.546 Max: 16 00:22:23.546 Min: 16 00:22:23.546 Number of Namespaces: 32 00:22:23.546 Compare Command: Supported 00:22:23.546 Write Uncorrectable Command: Not Supported 00:22:23.546 Dataset Management Command: Supported 00:22:23.546 Write Zeroes Command: Supported 00:22:23.546 Set Features Save Field: Not Supported 00:22:23.546 Reservations: Supported 00:22:23.546 Timestamp: Not Supported 00:22:23.546 Copy: Supported 00:22:23.546 Volatile Write Cache: Present 00:22:23.546 Atomic Write Unit (Normal): 1 00:22:23.546 Atomic Write Unit (PFail): 1 00:22:23.546 Atomic Compare & Write Unit: 1 00:22:23.546 Fused Compare & Write: Supported 00:22:23.546 Scatter-Gather List 00:22:23.546 SGL Command Set: Supported 00:22:23.546 SGL Keyed: Supported 00:22:23.546 SGL Bit Bucket Descriptor: Not Supported 00:22:23.546 SGL Metadata Pointer: Not Supported 00:22:23.546 Oversized SGL: Not Supported 00:22:23.546 SGL Metadata Address: Not Supported 00:22:23.546 SGL Offset: Supported 00:22:23.546 Transport SGL Data Block: Not Supported 00:22:23.546 Replay Protected Memory Block: Not Supported 00:22:23.546 00:22:23.546 Firmware Slot Information 00:22:23.546 ========================= 00:22:23.546 Active slot: 1 00:22:23.546 Slot 1 Firmware Revision: 25.01 00:22:23.546 00:22:23.546 00:22:23.546 Commands Supported and Effects 00:22:23.546 ============================== 00:22:23.546 Admin Commands 00:22:23.546 -------------- 00:22:23.546 Get Log Page (02h): Supported 00:22:23.546 Identify (06h): Supported 00:22:23.546 Abort (08h): Supported 00:22:23.546 Set Features (09h): Supported 00:22:23.546 Get Features (0Ah): Supported 00:22:23.546 Asynchronous Event Request (0Ch): 
Supported 00:22:23.546 Keep Alive (18h): Supported 00:22:23.546 I/O Commands 00:22:23.546 ------------ 00:22:23.546 Flush (00h): Supported LBA-Change 00:22:23.546 Write (01h): Supported LBA-Change 00:22:23.546 Read (02h): Supported 00:22:23.546 Compare (05h): Supported 00:22:23.546 Write Zeroes (08h): Supported LBA-Change 00:22:23.546 Dataset Management (09h): Supported LBA-Change 00:22:23.546 Copy (19h): Supported LBA-Change 00:22:23.546 00:22:23.546 Error Log 00:22:23.546 ========= 00:22:23.546 00:22:23.546 Arbitration 00:22:23.546 =========== 00:22:23.546 Arbitration Burst: 1 00:22:23.546 00:22:23.546 Power Management 00:22:23.546 ================ 00:22:23.546 Number of Power States: 1 00:22:23.546 Current Power State: Power State #0 00:22:23.546 Power State #0: 00:22:23.546 Max Power: 0.00 W 00:22:23.546 Non-Operational State: Operational 00:22:23.547 Entry Latency: Not Reported 00:22:23.547 Exit Latency: Not Reported 00:22:23.547 Relative Read Throughput: 0 00:22:23.547 Relative Read Latency: 0 00:22:23.547 Relative Write Throughput: 0 00:22:23.547 Relative Write Latency: 0 00:22:23.547 Idle Power: Not Reported 00:22:23.547 Active Power: Not Reported 00:22:23.547 Non-Operational Permissive Mode: Not Supported 00:22:23.547 00:22:23.547 Health Information 00:22:23.547 ================== 00:22:23.547 Critical Warnings: 00:22:23.547 Available Spare Space: OK 00:22:23.547 Temperature: OK 00:22:23.547 Device Reliability: OK 00:22:23.547 Read Only: No 00:22:23.547 Volatile Memory Backup: OK 00:22:23.547 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:23.547 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:23.547 Available Spare: 0% 00:22:23.547 Available Spare Threshold: 0% 00:22:23.547 Life Percentage Used:[2024-12-11 15:03:16.498578] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.547 [2024-12-11 15:03:16.498583] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc91690) 00:22:23.547 [2024-12-11 15:03:16.498589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.547 [2024-12-11 15:03:16.498600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3b80, cid 7, qid 0 00:22:23.547 [2024-12-11 15:03:16.498725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.547 [2024-12-11 15:03:16.498732] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.547 [2024-12-11 15:03:16.498735] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.547 [2024-12-11 15:03:16.498738] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3b80) on tqpair=0xc91690 00:22:23.547 [2024-12-11 15:03:16.498765] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:23.547 [2024-12-11 15:03:16.498775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3100) on tqpair=0xc91690 00:22:23.547 [2024-12-11 15:03:16.498781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.547 [2024-12-11 15:03:16.498785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3280) on tqpair=0xc91690 00:22:23.547 [2024-12-11 15:03:16.498790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.547 [2024-12-11 
15:03:16.498794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3400) on tqpair=0xc91690 00:22:23.547 [2024-12-11 15:03:16.498798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.547 [2024-12-11 15:03:16.498803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3580) on tqpair=0xc91690 00:22:23.547 [2024-12-11 15:03:16.498807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:23.547 [2024-12-11 15:03:16.498813] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.547 [2024-12-11 15:03:16.498817] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.547 [2024-12-11 15:03:16.498822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc91690) 00:22:23.547 [2024-12-11 15:03:16.498828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.547 [2024-12-11 15:03:16.498840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3580, cid 3, qid 0 00:22:23.547 [2024-12-11 15:03:16.498921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.547 [2024-12-11 15:03:16.498927] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.547 [2024-12-11 15:03:16.498930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.547 [2024-12-11 15:03:16.498934] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3580) on tqpair=0xc91690 00:22:23.547 [2024-12-11 15:03:16.498939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.547 [2024-12-11 15:03:16.498943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.547 [2024-12-11 15:03:16.498946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc91690) 00:22:23.547 [2024-12-11 15:03:16.498952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.547 [2024-12-11 15:03:16.498963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3580, cid 3, qid 0 00:22:23.547 [2024-12-11 15:03:16.499036] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.547 [2024-12-11 15:03:16.499042] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.547 [2024-12-11 15:03:16.499045] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.547 [2024-12-11 15:03:16.499049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3580) on tqpair=0xc91690 00:22:23.547 [2024-12-11 15:03:16.499053] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:23.547 [2024-12-11 15:03:16.499057] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:23.547 [2024-12-11 15:03:16.499065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.547 [2024-12-11 15:03:16.499069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.547 [2024-12-11 15:03:16.499072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc91690) 00:22:23.547 [2024-12-11 15:03:16.499078] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.547 [2024-12-11 15:03:16.499087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3580, cid 3, qid 0 00:22:23.547 [2024-12-11 15:03:16.503164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.547 [2024-12-11 15:03:16.503172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.547 [2024-12-11 15:03:16.503176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.547 [2024-12-11 15:03:16.503179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3580) on tqpair=0xc91690 00:22:23.547 [2024-12-11 15:03:16.503189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:23.547 [2024-12-11 15:03:16.503193] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:23.547 [2024-12-11 15:03:16.503196] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc91690) 00:22:23.547 [2024-12-11 15:03:16.503202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:23.547 [2024-12-11 15:03:16.503214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf3580, cid 3, qid 0 00:22:23.547 [2024-12-11 15:03:16.503398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:23.547 [2024-12-11 15:03:16.503404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:23.547 [2024-12-11 15:03:16.503407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:23.547 [2024-12-11 15:03:16.503410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf3580) on tqpair=0xc91690 00:22:23.547 [2024-12-11 15:03:16.503417] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:22:23.547 0% 00:22:23.547 Data Units Read: 0 00:22:23.547 Data Units Written: 0 00:22:23.547 Host Read Commands: 0 00:22:23.547 Host Write Commands: 0 00:22:23.547 Controller Busy Time: 0 minutes 00:22:23.547 Power Cycles: 0 00:22:23.547 Power On Hours: 0 hours 00:22:23.547 Unsafe Shutdowns: 0 00:22:23.547 Unrecoverable Media Errors: 0 00:22:23.547 Lifetime Error Log Entries: 0 00:22:23.547 Warning Temperature Time: 0 minutes 00:22:23.547 Critical Temperature Time: 0 minutes 00:22:23.547 00:22:23.547 Number of Queues 00:22:23.547 ================ 00:22:23.547 Number of I/O Submission Queues: 127 00:22:23.547 Number of I/O Completion Queues: 127 00:22:23.547 00:22:23.547 Active Namespaces 00:22:23.547 ================= 00:22:23.547 Namespace ID:1 00:22:23.547 Error Recovery Timeout: Unlimited 00:22:23.547 Command Set Identifier: NVM (00h) 00:22:23.547 Deallocate: Supported 00:22:23.547 Deallocated/Unwritten Error: Not Supported 00:22:23.547 Deallocated Read Value: Unknown 00:22:23.547 Deallocate in Write Zeroes: Not Supported 00:22:23.547 Deallocated Guard Field: 0xFFFF 00:22:23.547 Flush: Supported 00:22:23.547 Reservation: Supported 00:22:23.547 Namespace Sharing Capabilities: Multiple Controllers 00:22:23.547 Size (in LBAs): 131072 (0GiB) 00:22:23.547 Capacity (in LBAs): 131072 (0GiB) 00:22:23.547 Utilization (in LBAs): 131072 (0GiB) 00:22:23.547 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:23.547 EUI64: ABCDEF0123456789 00:22:23.547 UUID: acd0b1f7-857f-459b-a226-b0c710dd57be 00:22:23.547 Thin Provisioning: Not Supported 00:22:23.547 
Per-NS Atomic Units: Yes 00:22:23.547 Atomic Boundary Size (Normal): 0 00:22:23.547 Atomic Boundary Size (PFail): 0 00:22:23.547 Atomic Boundary Offset: 0 00:22:23.547 Maximum Single Source Range Length: 65535 00:22:23.547 Maximum Copy Length: 65535 00:22:23.547 Maximum Source Range Count: 1 00:22:23.547 NGUID/EUI64 Never Reused: No 00:22:23.547 Namespace Write Protected: No 00:22:23.547 Number of LBA Formats: 1 00:22:23.547 Current LBA Format: LBA Format #00 00:22:23.547 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:23.547 00:22:23.547 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:23.547 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:23.547 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.547 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:23.547 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.547 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:23.547 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:23.547 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:23.547 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:22:23.547 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:23.547 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:22:23.547 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:23.547 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:23.547 rmmod nvme_tcp 00:22:23.547 rmmod nvme_fabrics 00:22:23.547 rmmod nvme_keyring 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3193494 ']' 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3193494 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3193494 ']' 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3193494 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3193494 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3193494' 00:22:23.807 killing process with pid 3193494 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3193494 00:22:23.807 15:03:16 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3193494 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.807 15:03:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.345 15:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:26.345 00:22:26.345 real 0m9.363s 00:22:26.345 user 0m5.790s 00:22:26.345 sys 0m4.840s 00:22:26.345 15:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:26.345 15:03:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:26.345 ************************************ 00:22:26.345 END TEST nvmf_identify 00:22:26.345 ************************************ 00:22:26.345 15:03:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:26.345 15:03:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:26.345 15:03:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:26.345 15:03:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:26.345 ************************************ 00:22:26.345 START TEST nvmf_perf 00:22:26.345 ************************************ 00:22:26.345 15:03:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:26.345 * Looking for test storage... 
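
Before the log moves on to the perf test, here is a minimal host-side sketch of how the Identify data dumped above (the "SPDK bdev Controller" with firmware 25.01 and namespace 1) could be fetched with SPDK's public NVMe API over TCP. The target address 10.0.0.2:4420 and subsystem NQN nqn.2016-06.io.spdk:cnode1 are taken from the log; the job itself drives the bundled identify example through host/identify.sh, so this program is an assumption-laden illustration, not what the job ran.

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;
    uint32_t nsid;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";        /* illustrative app name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Address, port and NQN as reported in the identify output above. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Drives the whole sequence traced above: connect, enable, identify. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Model: %.40s  Serial: %.20s  FW: %.8s\n",
           cdata->mn, cdata->sn, cdata->fr);

    for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
         nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
        struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
        printf("Namespace %u: %llu blocks of %u bytes\n", nsid,
               (unsigned long long)spdk_nvme_ns_get_num_sectors(ns),
               spdk_nvme_ns_get_sector_size(ns));
    }

    spdk_nvme_detach(ctrlr);
    return 0;
}
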
00:22:26.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:26.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.345 --rc genhtml_branch_coverage=1 00:22:26.345 --rc genhtml_function_coverage=1 00:22:26.345 --rc genhtml_legend=1 00:22:26.345 --rc geninfo_all_blocks=1 00:22:26.345 --rc geninfo_unexecuted_blocks=1 00:22:26.345 00:22:26.345 ' 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:26.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.345 --rc genhtml_branch_coverage=1 00:22:26.345 --rc genhtml_function_coverage=1 00:22:26.345 --rc genhtml_legend=1 00:22:26.345 --rc geninfo_all_blocks=1 00:22:26.345 --rc geninfo_unexecuted_blocks=1 00:22:26.345 00:22:26.345 ' 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:26.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.345 --rc genhtml_branch_coverage=1 00:22:26.345 --rc genhtml_function_coverage=1 00:22:26.345 --rc genhtml_legend=1 00:22:26.345 --rc geninfo_all_blocks=1 00:22:26.345 --rc geninfo_unexecuted_blocks=1 00:22:26.345 00:22:26.345 ' 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:26.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.345 --rc genhtml_branch_coverage=1 00:22:26.345 --rc genhtml_function_coverage=1 00:22:26.345 --rc genhtml_legend=1 00:22:26.345 --rc geninfo_all_blocks=1 00:22:26.345 --rc geninfo_unexecuted_blocks=1 00:22:26.345 00:22:26.345 ' 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.345 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.346 15:03:19 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:26.346 15:03:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:32.923 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:32.923 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:32.923 Found net devices under 0000:86:00.0: cvl_0_0 00:22:32.923 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.924 15:03:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:32.924 Found net devices under 0000:86:00.1: cvl_0_1 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:32.924 15:03:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.924 15:03:25 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:32.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:22:32.924 00:22:32.924 --- 10.0.0.2 ping statistics --- 00:22:32.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.924 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:32.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:22:32.924 00:22:32.924 --- 10.0.0.1 ping statistics --- 00:22:32.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.924 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3197094 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3197094 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3197094 ']' 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:22:32.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:32.924 [2024-12-11 15:03:25.256231] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:22:32.924 [2024-12-11 15:03:25.256275] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.924 [2024-12-11 15:03:25.337688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:32.924 [2024-12-11 15:03:25.378924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.924 [2024-12-11 15:03:25.378961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.924 [2024-12-11 15:03:25.378971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.924 [2024-12-11 15:03:25.378978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.924 [2024-12-11 15:03:25.378984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.924 [2024-12-11 15:03:25.380466] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.924 [2024-12-11 15:03:25.380574] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:32.924 [2024-12-11 15:03:25.380683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.924 [2024-12-11 15:03:25.380683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh 00:22:32.924 15:03:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py load_subsystem_config 00:22:36.198 15:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py framework_get_config bdev 00:22:36.198 15:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:36.198 15:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:36.198 15:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:36.198 15:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' 
Malloc0' 00:22:36.198 15:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:36.198 15:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:36.198 15:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:36.198 15:03:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:36.198 [2024-12-11 15:03:29.163892] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.198 15:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:36.455 15:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:36.455 15:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:36.712 15:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:36.712 15:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:36.968 15:03:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:36.968 [2024-12-11 15:03:29.982987] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.968 15:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:37.225 15:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:37.225 15:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:37.225 15:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:37.225 15:03:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:38.593 Initializing NVMe Controllers 00:22:38.593 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:38.593 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:38.593 Initialization complete. Launching workers. 
00:22:38.594 ======================================================== 00:22:38.594 Latency(us) 00:22:38.594 Device Information : IOPS MiB/s Average min max 00:22:38.594 PCIE (0000:5e:00.0) NSID 1 from core 0: 97194.39 379.67 328.81 30.03 8208.59 00:22:38.594 ======================================================== 00:22:38.594 Total : 97194.39 379.67 328.81 30.03 8208.59 00:22:38.594 00:22:38.594 15:03:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:39.961 Initializing NVMe Controllers 00:22:39.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:39.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:39.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:39.961 Initialization complete. Launching workers. 00:22:39.961 ======================================================== 00:22:39.961 Latency(us) 00:22:39.961 Device Information : IOPS MiB/s Average min max 00:22:39.961 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 64.00 0.25 15966.42 106.76 45252.68 00:22:39.961 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 78.00 0.30 13085.63 7186.51 47888.50 00:22:39.961 ======================================================== 00:22:39.961 Total : 142.00 0.55 14384.01 106.76 47888.50 00:22:39.961 00:22:39.961 15:03:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:41.331 Initializing NVMe Controllers 00:22:41.331 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:41.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:41.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:41.331 Initialization complete. Launching workers. 00:22:41.331 ======================================================== 00:22:41.331 Latency(us) 00:22:41.331 Device Information : IOPS MiB/s Average min max 00:22:41.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10907.35 42.61 2937.08 508.24 6352.01 00:22:41.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3830.77 14.96 8387.37 6233.49 15837.11 00:22:41.331 ======================================================== 00:22:41.331 Total : 14738.12 57.57 4353.73 508.24 15837.11 00:22:41.331 00:22:41.331 15:03:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:41.331 15:03:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:41.331 15:03:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:43.898 Initializing NVMe Controllers 00:22:43.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:43.898 Controller IO queue size 128, less than required. 00:22:43.898 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:43.898 Controller IO queue size 128, less than required. 00:22:43.898 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:43.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:43.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:43.898 Initialization complete. Launching workers. 00:22:43.898 ======================================================== 00:22:43.898 Latency(us) 00:22:43.898 Device Information : IOPS MiB/s Average min max 00:22:43.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1758.99 439.75 73964.58 49211.18 129068.03 00:22:43.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 609.00 152.25 224549.59 73695.73 344747.01 00:22:43.898 ======================================================== 00:22:43.898 Total : 2367.98 592.00 112691.89 49211.18 344747.01 00:22:43.898 00:22:43.898 15:03:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:44.183 No valid NVMe controllers or AIO or URING devices found 00:22:44.183 Initializing NVMe Controllers 00:22:44.183 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:44.183 Controller IO queue size 128, less than required. 00:22:44.183 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:44.183 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:44.183 Controller IO queue size 128, less than required. 00:22:44.183 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:44.183 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:22:44.183 WARNING: Some requested NVMe devices were skipped 00:22:44.183 15:03:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:46.720 Initializing NVMe Controllers 00:22:46.720 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:46.720 Controller IO queue size 128, less than required. 00:22:46.720 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:46.720 Controller IO queue size 128, less than required. 00:22:46.720 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:46.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:46.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:46.720 Initialization complete. Launching workers. 
00:22:46.720 00:22:46.720 ==================== 00:22:46.720 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:46.720 TCP transport: 00:22:46.720 polls: 14507 00:22:46.720 idle_polls: 11400 00:22:46.720 sock_completions: 3107 00:22:46.720 nvme_completions: 5977 00:22:46.720 submitted_requests: 9014 00:22:46.720 queued_requests: 1 00:22:46.720 00:22:46.720 ==================== 00:22:46.720 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:46.720 TCP transport: 00:22:46.720 polls: 14986 00:22:46.720 idle_polls: 11102 00:22:46.720 sock_completions: 3884 00:22:46.720 nvme_completions: 6537 00:22:46.720 submitted_requests: 9750 00:22:46.720 queued_requests: 1 00:22:46.720 ======================================================== 00:22:46.720 Latency(us) 00:22:46.720 Device Information : IOPS MiB/s Average min max 00:22:46.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1492.28 373.07 86608.76 55520.34 133196.85 00:22:46.720 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1632.12 408.03 79256.91 53387.71 127662.32 00:22:46.720 ======================================================== 00:22:46.720 Total : 3124.41 781.10 82768.31 53387.71 133196.85 00:22:46.720 00:22:46.977 15:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:46.977 15:03:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:46.977 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:46.977 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:46.977 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:46.977 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:46.977 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:22:46.977 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:46.977 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:22:46.977 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:46.977 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:46.977 rmmod nvme_tcp 00:22:47.233 rmmod nvme_fabrics 00:22:47.233 rmmod nvme_keyring 00:22:47.233 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.233 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:22:47.233 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:22:47.234 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3197094 ']' 00:22:47.234 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3197094 00:22:47.234 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3197094 ']' 00:22:47.234 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3197094 00:22:47.234 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:47.234 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:47.234 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3197094 00:22:47.234 15:03:40 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:47.234 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:47.234 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3197094' 00:22:47.234 killing process with pid 3197094 00:22:47.234 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3197094 00:22:47.234 15:03:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3197094 00:22:48.604 15:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:48.604 15:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:48.604 15:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:48.604 15:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:22:48.604 15:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:22:48.604 15:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:48.604 15:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:22:48.605 15:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:48.605 15:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:48.605 15:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.605 15:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.605 15:03:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:51.140 00:22:51.140 real 0m24.715s 00:22:51.140 user 1m4.567s 00:22:51.140 sys 0m8.340s 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:51.140 ************************************ 00:22:51.140 END TEST nvmf_perf 00:22:51.140 ************************************ 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.140 ************************************ 00:22:51.140 START TEST nvmf_fio_host 00:22:51.140 ************************************ 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:51.140 * Looking for test storage... 
00:22:51.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:51.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.140 --rc genhtml_branch_coverage=1 00:22:51.140 --rc genhtml_function_coverage=1 00:22:51.140 --rc genhtml_legend=1 00:22:51.140 --rc geninfo_all_blocks=1 00:22:51.140 --rc geninfo_unexecuted_blocks=1 00:22:51.140 00:22:51.140 ' 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:51.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.140 --rc genhtml_branch_coverage=1 00:22:51.140 --rc genhtml_function_coverage=1 00:22:51.140 --rc genhtml_legend=1 00:22:51.140 --rc geninfo_all_blocks=1 00:22:51.140 --rc geninfo_unexecuted_blocks=1 00:22:51.140 00:22:51.140 ' 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:51.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.140 --rc genhtml_branch_coverage=1 00:22:51.140 --rc genhtml_function_coverage=1 00:22:51.140 --rc genhtml_legend=1 00:22:51.140 --rc geninfo_all_blocks=1 00:22:51.140 --rc geninfo_unexecuted_blocks=1 00:22:51.140 00:22:51.140 ' 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:51.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.140 --rc genhtml_branch_coverage=1 00:22:51.140 --rc genhtml_function_coverage=1 00:22:51.140 --rc genhtml_legend=1 00:22:51.140 --rc geninfo_all_blocks=1 00:22:51.140 --rc geninfo_unexecuted_blocks=1 00:22:51.140 00:22:51.140 ' 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.140 15:03:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.140 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:51.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:22:51.141 
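With nvmf/common.sh sourced and rpc_py pointing at scripts/rpc.py, the rest of the target-side setup in this run is driven entirely over the RPC socket. A condensed sketch of the sequence the script issues further down in this same log (paths shortened to a $rpc shorthand for readability; the parameters match the traced commands):

    rpc=scripts/rpc.py                                   # shorthand for the rpc_py path set above
    $rpc nvmf_create_transport -t tcp -o -u 8192         # create the TCP transport (options exactly as traced below)
    $rpc bdev_malloc_create 64 512 -b Malloc1            # 64 MiB RAM-backed bdev with 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allow any host, -s serial number
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1                    # expose the bdev as a namespace
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listen on the target IP, port 4420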
15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:22:51.141 15:03:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:57.710 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:57.710 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:57.710 Found net devices under 0000:86:00.0: cvl_0_0 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:57.710 Found net devices under 0000:86:00.1: cvl_0_1 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:57.710 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:57.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:22:57.711 00:22:57.711 --- 10.0.0.2 ping statistics --- 00:22:57.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.711 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:22:57.711 00:22:57.711 --- 10.0.0.1 ping statistics --- 00:22:57.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.711 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3203312 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3203312 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3203312 ']' 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:57.711 15:03:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.711 [2024-12-11 15:03:49.957549] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
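The nvmf_tcp_init traced above isolates the target-side E810 port in its own network namespace so that target (10.0.0.2) and initiator (10.0.0.1) can talk over real hardware on a single host. Reduced to its essentials, the sequence amounts to roughly the following (interface names as discovered above):

    ip netns add cvl_0_0_ns_spdk                          # namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator IP stays in the default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port toward the initiator
    # after the ping checks, the target is launched inside the namespace:
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
    # -i shared-memory id, -e tracepoint group mask (0xFFFF, echoed in the notices below), -m core mask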
00:22:57.711 [2024-12-11 15:03:49.957603] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.711 [2024-12-11 15:03:50.038843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.711 [2024-12-11 15:03:50.083458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.711 [2024-12-11 15:03:50.083499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.711 [2024-12-11 15:03:50.083509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.711 [2024-12-11 15:03:50.083517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.711 [2024-12-11 15:03:50.083523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.711 [2024-12-11 15:03:50.085233] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.711 [2024-12-11 15:03:50.085279] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.711 [2024-12-11 15:03:50.085385] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.711 [2024-12-11 15:03:50.085386] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.711 15:03:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:57.711 15:03:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:57.711 15:03:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:57.711 [2024-12-11 15:03:50.352859] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.711 15:03:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:57.711 15:03:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:57.711 15:03:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.711 15:03:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:57.711 Malloc1 00:22:57.711 15:03:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:57.968 15:03:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:58.224 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:58.224 [2024-12-11 15:03:51.213642] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.224 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:58.481 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme 00:22:58.481 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:58.481 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:58.481 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:58.481 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:58.481 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:58.481 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:22:58.481 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:58.481 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:58.482 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:58.482 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:22:58.482 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:58.482 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:58.482 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:58.482 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:58.482 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:58.482 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:22:58.482 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:58.482 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:58.482 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:58.482 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:58.482 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme' 00:22:58.482 15:03:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:58.738 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:58.738 fio-3.35 00:22:58.738 Starting 1 thread 00:23:01.260 
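The fio_nvme wrapper above runs stock fio against the target through SPDK's fio plugin: the plugin is injected via LD_PRELOAD and the NVMe/TCP connection is encoded in the job's filename string rather than a block device. A minimal sketch of an equivalent invocation follows; example_config.fio itself is not reproduced in this log, so the job-file contents here are an assumption, not the actual file:

    cat > nvme_tcp_job.fio <<'EOF'
    [global]
    ioengine=spdk
    thread=1
    rw=randrw
    iodepth=128
    [test]
    filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1
    EOF
    # ioengine=spdk is registered by the preloaded plugin; thread=1 is required by it.
    # In the traced run the block size is supplied on the command line (--bs=4096).
    LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio nvme_tcp_job.fio --bs=4096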
00:23:01.260 test: (groupid=0, jobs=1): err= 0: pid=3203793: Wed Dec 11 15:03:54 2024 00:23:01.260 read: IOPS=11.7k, BW=45.6MiB/s (47.8MB/s)(91.4MiB/2005msec) 00:23:01.260 slat (nsec): min=1598, max=251702, avg=1756.64, stdev=2259.42 00:23:01.260 clat (usec): min=3036, max=10452, avg=6041.78, stdev=476.84 00:23:01.260 lat (usec): min=3073, max=10454, avg=6043.54, stdev=476.73 00:23:01.260 clat percentiles (usec): 00:23:01.260 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:23:01.260 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 6063], 60.00th=[ 6194], 00:23:01.260 | 70.00th=[ 6259], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6783], 00:23:01.260 | 99.00th=[ 7046], 99.50th=[ 7177], 99.90th=[ 8586], 99.95th=[10028], 00:23:01.260 | 99.99th=[10421] 00:23:01.260 bw ( KiB/s): min=45648, max=47432, per=99.95%, avg=46660.00, stdev=743.97, samples=4 00:23:01.260 iops : min=11412, max=11858, avg=11665.00, stdev=185.99, samples=4 00:23:01.260 write: IOPS=11.6k, BW=45.3MiB/s (47.5MB/s)(90.8MiB/2005msec); 0 zone resets 00:23:01.260 slat (nsec): min=1628, max=226089, avg=1811.80, stdev=1656.74 00:23:01.260 clat (usec): min=2428, max=9224, avg=4915.43, stdev=384.47 00:23:01.260 lat (usec): min=2443, max=9226, avg=4917.25, stdev=384.46 00:23:01.260 clat percentiles (usec): 00:23:01.260 | 1.00th=[ 4047], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 4621], 00:23:01.260 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4883], 60.00th=[ 5014], 00:23:01.260 | 70.00th=[ 5080], 80.00th=[ 5211], 90.00th=[ 5342], 95.00th=[ 5473], 00:23:01.260 | 99.00th=[ 5735], 99.50th=[ 5932], 99.90th=[ 7046], 99.95th=[ 8848], 00:23:01.260 | 99.99th=[ 9241] 00:23:01.260 bw ( KiB/s): min=45976, max=46848, per=100.00%, avg=46352.00, stdev=382.27, samples=4 00:23:01.260 iops : min=11494, max=11712, avg=11588.00, stdev=95.57, samples=4 00:23:01.260 lat (msec) : 4=0.44%, 10=99.53%, 20=0.03% 00:23:01.260 cpu : usr=74.95%, sys=24.10%, ctx=91, majf=0, minf=3 00:23:01.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:01.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:01.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:01.260 issued rwts: total=23399,23234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:01.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:01.260 00:23:01.260 Run status group 0 (all jobs): 00:23:01.260 READ: bw=45.6MiB/s (47.8MB/s), 45.6MiB/s-45.6MiB/s (47.8MB/s-47.8MB/s), io=91.4MiB (95.8MB), run=2005-2005msec 00:23:01.260 WRITE: bw=45.3MiB/s (47.5MB/s), 45.3MiB/s-45.3MiB/s (47.5MB/s-47.5MB/s), io=90.8MiB (95.2MB), run=2005-2005msec 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:01.260 15:03:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme' 00:23:01.260 15:03:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:01.517 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:01.517 fio-3.35 00:23:01.517 Starting 1 thread 00:23:04.038 00:23:04.038 test: (groupid=0, jobs=1): err= 0: pid=3204370: Wed Dec 11 15:03:56 2024 00:23:04.038 read: IOPS=10.4k, BW=162MiB/s (170MB/s)(332MiB/2050msec) 00:23:04.038 slat (nsec): min=2560, max=84363, avg=2831.29, stdev=1216.25 00:23:04.038 clat (usec): min=2423, max=53167, avg=7306.99, stdev=4682.82 00:23:04.038 lat (usec): min=2425, max=53170, avg=7309.82, stdev=4682.83 00:23:04.038 clat percentiles (usec): 00:23:04.038 | 1.00th=[ 3687], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5407], 00:23:04.038 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6783], 60.00th=[ 7242], 00:23:04.038 | 70.00th=[ 7635], 80.00th=[ 8094], 90.00th=[ 9110], 95.00th=[10159], 00:23:04.038 | 99.00th=[43779], 99.50th=[47973], 99.90th=[52167], 99.95th=[52691], 00:23:04.038 | 99.99th=[53216] 00:23:04.038 bw ( KiB/s): min=80064, max=93184, per=52.01%, avg=86304.00, stdev=5490.65, samples=4 00:23:04.038 iops : min= 5004, max= 5824, avg=5394.00, stdev=343.17, samples=4 00:23:04.038 
write: IOPS=6176, BW=96.5MiB/s (101MB/s)(176MiB/1822msec); 0 zone resets 00:23:04.038 slat (usec): min=29, max=348, avg=31.66, stdev= 6.39 00:23:04.038 clat (usec): min=3822, max=54700, avg=8684.51, stdev=1984.87 00:23:04.038 lat (usec): min=3853, max=54731, avg=8716.17, stdev=1985.47 00:23:04.038 clat percentiles (usec): 00:23:04.038 | 1.00th=[ 5604], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7439], 00:23:04.038 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:23:04.038 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11469], 00:23:04.038 | 99.00th=[12649], 99.50th=[13042], 99.90th=[16319], 99.95th=[54264], 00:23:04.038 | 99.99th=[54789] 00:23:04.038 bw ( KiB/s): min=84288, max=97280, per=91.03%, avg=89960.00, stdev=5459.25, samples=4 00:23:04.038 iops : min= 5268, max= 6080, avg=5622.50, stdev=341.20, samples=4 00:23:04.038 lat (msec) : 4=1.73%, 10=88.82%, 20=8.67%, 50=0.54%, 100=0.24% 00:23:04.038 cpu : usr=85.85%, sys=13.41%, ctx=48, majf=0, minf=3 00:23:04.038 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:23:04.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:04.038 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:04.038 issued rwts: total=21262,11254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:04.038 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:04.038 00:23:04.038 Run status group 0 (all jobs): 00:23:04.038 READ: bw=162MiB/s (170MB/s), 162MiB/s-162MiB/s (170MB/s-170MB/s), io=332MiB (348MB), run=2050-2050msec 00:23:04.038 WRITE: bw=96.5MiB/s (101MB/s), 96.5MiB/s-96.5MiB/s (101MB/s-101MB/s), io=176MiB (184MB), run=1822-1822msec 00:23:04.038 15:03:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:04.296 rmmod nvme_tcp 00:23:04.296 rmmod nvme_fabrics 00:23:04.296 rmmod nvme_keyring 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3203312 ']' 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3203312 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- 
# '[' -z 3203312 ']' 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3203312 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3203312 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3203312' 00:23:04.296 killing process with pid 3203312 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3203312 00:23:04.296 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3203312 00:23:04.555 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:04.555 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:04.555 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:04.555 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:04.555 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:04.555 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:04.555 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:04.555 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:04.556 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:04.556 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.556 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.556 15:03:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.462 15:03:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:06.462 00:23:06.462 real 0m15.737s 00:23:06.462 user 0m46.266s 00:23:06.462 sys 0m6.487s 00:23:06.462 15:03:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.462 15:03:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.462 ************************************ 00:23:06.462 END TEST nvmf_fio_host 00:23:06.462 ************************************ 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.722 ************************************ 00:23:06.722 START TEST nvmf_failover 00:23:06.722 ************************************ 00:23:06.722 15:03:59 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:06.722 * Looking for test storage... 00:23:06.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:06.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.722 --rc genhtml_branch_coverage=1 00:23:06.722 --rc genhtml_function_coverage=1 00:23:06.722 --rc genhtml_legend=1 00:23:06.722 --rc geninfo_all_blocks=1 00:23:06.722 --rc geninfo_unexecuted_blocks=1 00:23:06.722 00:23:06.722 ' 00:23:06.722 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:06.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.722 --rc genhtml_branch_coverage=1 00:23:06.722 --rc genhtml_function_coverage=1 00:23:06.722 --rc genhtml_legend=1 00:23:06.722 --rc geninfo_all_blocks=1 00:23:06.723 --rc geninfo_unexecuted_blocks=1 00:23:06.723 00:23:06.723 ' 00:23:06.723 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:06.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.723 --rc genhtml_branch_coverage=1 00:23:06.723 --rc genhtml_function_coverage=1 00:23:06.723 --rc genhtml_legend=1 00:23:06.723 --rc geninfo_all_blocks=1 00:23:06.723 --rc geninfo_unexecuted_blocks=1 00:23:06.723 00:23:06.723 ' 00:23:06.723 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:06.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.723 --rc genhtml_branch_coverage=1 00:23:06.723 --rc genhtml_function_coverage=1 00:23:06.723 --rc genhtml_legend=1 00:23:06.723 --rc geninfo_all_blocks=1 00:23:06.723 --rc geninfo_unexecuted_blocks=1 00:23:06.723 00:23:06.723 ' 00:23:06.723 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:23:06.723 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:06.723 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.723 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.723 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.723 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.723 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.723 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.723 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.723 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.723 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.723 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:06.983 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:06.983 15:03:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:13.551 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:13.551 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:13.551 
15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:13.551 Found net devices under 0000:86:00.0: cvl_0_0 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:13.551 Found net devices under 0000:86:00.1: cvl_0_1 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:13.551 15:04:05 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:13.551 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:13.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:23:13.552 00:23:13.552 --- 10.0.0.2 ping statistics --- 00:23:13.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.552 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:13.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:13.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:23:13.552 00:23:13.552 --- 10.0.0.1 ping statistics --- 00:23:13.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.552 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3208251 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3208251 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3208251 ']' 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.552 15:04:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:13.552 [2024-12-11 15:04:05.785956] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:23:13.552 [2024-12-11 15:04:05.786004] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.552 [2024-12-11 15:04:05.864953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:13.552 [2024-12-11 15:04:05.906501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
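Up to this point nvmf/common.sh has finished the E810/TCP plumbing for the test: one port (cvl_0_0) is moved into a private network namespace for the target, the other (cvl_0_1) stays in the default namespace for the initiator side, 10.0.0.2 and 10.0.0.1 are assigned, an iptables rule opens TCP port 4420, and reachability is checked with ping before nvmf_tgt is started inside that namespace (the app_setup_trace notices above and below). A minimal sketch of that plumbing, run as root and assuming the interface names and addresses from this run, is:

#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init plumbing visible in the trace above; interface
# names, namespace name and addresses are the ones used in this run.
set -e

TARGET_IF=cvl_0_0          # port handed to the SPDK target
INITIATOR_IF=cvl_0_1       # port left in the default namespace (initiator)
NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"            # isolate the target port

ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic in on the initiator side and verify both directions.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$TARGET_IP"
ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"

The sketch only mirrors the ip/iptables/ping commands shown in the trace; in the test itself they are issued by nvmf_tcp_init in nvmf/common.sh, and nvmf_tgt is then launched with ip netns exec inside the same namespace.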
00:23:13.552 [2024-12-11 15:04:05.906535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.552 [2024-12-11 15:04:05.906542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.552 [2024-12-11 15:04:05.906548] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.552 [2024-12-11 15:04:05.906557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.552 [2024-12-11 15:04:05.907853] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.552 [2024-12-11 15:04:05.907957] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.552 [2024-12-11 15:04:05.907959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:23:13.810 15:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.810 15:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:13.810 15:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:13.810 15:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:13.810 15:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:13.810 15:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.810 15:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:13.810 [2024-12-11 15:04:06.831778] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.068 15:04:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:14.068 Malloc0 00:23:14.068 15:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:14.325 15:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:14.582 15:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:14.840 [2024-12-11 15:04:07.674416] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.840 15:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:14.840 [2024-12-11 15:04:07.870934] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:15.098 15:04:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:15.098 [2024-12-11 15:04:08.067590] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:15.098 15:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:15.098 15:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3208607 00:23:15.098 15:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:15.098 15:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3208607 /var/tmp/bdevperf.sock 00:23:15.098 15:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3208607 ']' 00:23:15.098 15:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.098 15:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.098 15:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.098 15:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.098 15:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:15.357 15:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.357 15:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:15.357 15:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:15.923 NVMe0n1 00:23:15.923 15:04:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:16.181 00:23:16.181 15:04:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:16.181 15:04:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3208837 00:23:16.181 15:04:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:17.115 15:04:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:17.373 [2024-12-11 15:04:10.290629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b5f30 is same with the state(6) to be set 00:23:17.373 [2024-12-11 15:04:10.290690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b5f30 is same with the state(6) to be set 00:23:17.373 [2024-12-11 15:04:10.290701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b5f30 is same with the state(6) to be 
set 00:23:17.373 [2024-12-11 15:04:10.290709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b5f30 is same with the state(6) to be set 00:23:17.374 (identical nvmf_tcp_qpair_set_recv_state messages for tqpair=0x5b5f30 repeat here) 00:23:17.374 [2024-12-11 15:04:10.291084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*:
The recv state of tqpair=0x5b5f30 is same with the state(6) to be set 00:23:17.374 [2024-12-11 15:04:10.291093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b5f30 is same with the state(6) to be set 00:23:17.374 [2024-12-11 15:04:10.291103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b5f30 is same with the state(6) to be set 00:23:17.374 15:04:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:20.657 15:04:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:20.915 00:23:20.915 15:04:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:21.173 15:04:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:24.456 15:04:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.456 [2024-12-11 15:04:17.191476] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.456 15:04:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:25.391 15:04:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:25.391 [2024-12-11 15:04:18.419627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 (identical nvmf_tcp_qpair_set_recv_state messages for tqpair=0x7031a0 repeat here) 00:23:25.391 [2024-12-11 15:04:18.419856]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419897] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419911] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419929] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.391 [2024-12-11 15:04:18.419947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7031a0 is same with the state(6) to be set 00:23:25.649 15:04:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3208837 00:23:32.212 { 00:23:32.212 "results": [ 00:23:32.212 { 00:23:32.212 "job": "NVMe0n1", 00:23:32.212 "core_mask": "0x1", 00:23:32.212 "workload": "verify", 00:23:32.212 "status": "finished", 00:23:32.212 "verify_range": { 00:23:32.212 "start": 0, 00:23:32.212 "length": 16384 00:23:32.212 }, 00:23:32.212 "queue_depth": 128, 00:23:32.212 "io_size": 4096, 00:23:32.212 "runtime": 15.003801, 00:23:32.212 "iops": 10876.310609558204, 00:23:32.212 "mibps": 42.485588318586736, 00:23:32.212 "io_failed": 13325, 00:23:32.212 "io_timeout": 0, 00:23:32.212 "avg_latency_us": 10857.73303020159, 00:23:32.212 "min_latency_us": 445.2173913043478, 00:23:32.212 "max_latency_us": 14930.810434782608 00:23:32.212 } 00:23:32.212 ], 00:23:32.212 "core_count": 1 00:23:32.212 } 00:23:32.212 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3208607 00:23:32.212 15:04:24 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3208607 ']' 00:23:32.212 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3208607 00:23:32.212 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:32.212 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.212 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3208607 00:23:32.212 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:32.212 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:32.213 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3208607' 00:23:32.213 killing process with pid 3208607 00:23:32.213 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3208607 00:23:32.213 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3208607 00:23:32.213 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:23:32.213 [2024-12-11 15:04:08.142177] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:23:32.213 [2024-12-11 15:04:08.142227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208607 ] 00:23:32.213 [2024-12-11 15:04:08.216270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.213 [2024-12-11 15:04:08.257313] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.213 Running I/O for 15 seconds... 
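try.txt, dumped below, is the bdevperf log behind the JSON summary above (10876 IOPS over the 15 s verify run, with 13325 failed I/Os recorded while listeners were removed and re-added). Condensed into a sketch that assumes the same sockets, NQN and ports as this run, with rpc.py paths shortened and the target already running as nvmf_tgt inside the namespace, host/failover.sh boils down to:

# Sketch of the provisioning and failover sequence traced above; paths are
# relative to the SPDK checkout, NQN and ports are the ones used in this run.
RPC=scripts/rpc.py                    # target RPCs go to the default /var/tmp/spdk.sock
BPERF_SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Target side: TCP transport, one malloc namespace, three listeners.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns $NQN Malloc0
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

# Host side: bdevperf was started earlier with
#   build/examples/bdevperf -z -r $BPERF_SOCK -q 128 -o 4096 -w verify -t 15 -f
# Attach the same subsystem through two portals; -x failover keeps the extra
# paths as failover targets rather than active multipath.
$RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN -x failover
$RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN -x failover

# Start the 15 s verify workload, then flap listeners to force failovers.
examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests &
run_test_pid=$!
sleep 1
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # fail over to 4421
sleep 3
$RPC -s $BPERF_SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN -x failover
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
sleep 3
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
sleep 1
$RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422   # back to 4420
wait $run_test_pid

The repeated attach_controller calls against the same NQN with -x failover are what let the verify job keep running across each remove_listener: I/O on the dropped path fails and is retried on the remaining portal, which is where the io_failed count in the JSON summary comes from.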
00:23:32.213 10976.00 IOPS, 42.88 MiB/s [2024-12-11T14:04:25.261Z] [2024-12-11 15:04:10.292337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 
15:04:10.292677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.213 [2024-12-11 15:04:10.292873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.213 [2024-12-11 15:04:10.292879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 15:04:10.292887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.214 [2024-12-11 15:04:10.292894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 15:04:10.292902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.214 [2024-12-11 15:04:10.292910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 15:04:10.292920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:98008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.214 [2024-12-11 15:04:10.292926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 15:04:10.292934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.214 [2024-12-11 15:04:10.292940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 15:04:10.292949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.214 [2024-12-11 15:04:10.292955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 15:04:10.292964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.214 [2024-12-11 15:04:10.292970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.214 [2024-12-11 15:04:10.292978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:30 nsid:1 lba:98040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[00:23:32.214-00:23:32.216, 2024-12-11 15:04:10.292984-15:04:10.294225: repetitive nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* records condensed - READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands on sqid:1, lba 98040 through 98720, len:8, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:32.216 [2024-12-11 15:04:10.294246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:32.216 [2024-12-11 15:04:10.294253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98728 len:8 PRP1 0x0 PRP2 0x0
00:23:32.216 [2024-12-11 15:04:10.294259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:32.216 [2024-12-11 15:04:10.294268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:32.216 [2024-12-11 15:04:10.294273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:32.216 [2024-12-11 15:04:10.294279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98736 len:8 PRP1 0x0 PRP2 0x0
00:23:32.216 [2024-12-11 15:04:10.294285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:32.216 [2024-12-11 15:04:10.294328] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[00:23:32.216, 2024-12-11 15:04:10.294349-15:04:10.294399: four nvme_qpair.c 223:nvme_admin_qpair_print_command *NOTICE* records condensed - ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed as ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:32.216 [2024-12-11 15:04:10.294405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:32.216 [2024-12-11 15:04:10.294440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e05fe0 (9): Bad file descriptor
00:23:32.216 [2024-12-11 15:04:10.297297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:32.216 [2024-12-11 15:04:10.400507] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:23:32.216 10416.00 IOPS, 40.69 MiB/s [2024-12-11T14:04:25.264Z] 10724.33 IOPS, 41.89 MiB/s [2024-12-11T14:04:25.264Z] 10861.75 IOPS, 42.43 MiB/s [2024-12-11T14:04:25.264Z]
[00:23:32.216-00:23:32.219, 2024-12-11 15:04:13.971996-15:04:13.973736: repetitive nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* records condensed - READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands on sqid:1, lba 63704 through 64640, len:8, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:23:32.219 [2024-12-11 15:04:13.973744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:115 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.219 [2024-12-11 15:04:13.973750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:13.973758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.220 [2024-12-11 15:04:13.973764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:13.973787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.220 [2024-12-11 15:04:13.973794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64664 len:8 PRP1 0x0 PRP2 0x0 00:23:32.220 [2024-12-11 15:04:13.973800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:13.973810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.220 [2024-12-11 15:04:13.973815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.220 [2024-12-11 15:04:13.973820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64672 len:8 PRP1 0x0 PRP2 0x0 00:23:32.220 [2024-12-11 15:04:13.973827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:13.973833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.220 [2024-12-11 15:04:13.973838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.220 [2024-12-11 15:04:13.973843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64680 len:8 PRP1 0x0 PRP2 0x0 00:23:32.220 [2024-12-11 15:04:13.973849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:13.973856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.220 [2024-12-11 15:04:13.973860] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.220 [2024-12-11 15:04:13.973865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64688 len:8 PRP1 0x0 PRP2 0x0 00:23:32.220 [2024-12-11 15:04:13.973872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:13.973880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.220 [2024-12-11 15:04:13.973885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.220 [2024-12-11 15:04:13.973890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64696 len:8 PRP1 0x0 PRP2 0x0 00:23:32.220 [2024-12-11 15:04:13.973896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:13.973903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.220 [2024-12-11 15:04:13.973908] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.220 [2024-12-11 15:04:13.973914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64704 len:8 PRP1 0x0 PRP2 0x0 00:23:32.220 [2024-12-11 15:04:13.973924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:13.973930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.220 [2024-12-11 15:04:13.973935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.220 [2024-12-11 15:04:13.973941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64712 len:8 PRP1 0x0 PRP2 0x0 00:23:32.220 [2024-12-11 15:04:13.973947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:13.973953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.220 [2024-12-11 15:04:13.973958] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.220 [2024-12-11 15:04:13.973963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64720 len:8 PRP1 0x0 PRP2 0x0 00:23:32.220 [2024-12-11 15:04:13.973969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:13.973976] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.220 [2024-12-11 15:04:13.973981] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.220 [2024-12-11 15:04:13.973986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:63984 len:8 PRP1 0x0 PRP2 0x0 00:23:32.220 [2024-12-11 15:04:13.973992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:13.974035] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:23:32.220 [2024-12-11 15:04:13.974056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.220 [2024-12-11 15:04:13.974064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:13.974071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.220 [2024-12-11 15:04:13.974077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:13.974084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.220 [2024-12-11 15:04:13.974090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:13.974097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:23:32.220 [2024-12-11 15:04:13.974104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:13.974110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:32.220 [2024-12-11 15:04:13.974133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e05fe0 (9): Bad file descriptor 00:23:32.220 [2024-12-11 15:04:13.976965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:32.220 [2024-12-11 15:04:13.999090] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:23:32.220 10845.00 IOPS, 42.36 MiB/s [2024-12-11T14:04:25.268Z] 10883.17 IOPS, 42.51 MiB/s [2024-12-11T14:04:25.268Z] 10903.57 IOPS, 42.59 MiB/s [2024-12-11T14:04:25.268Z] 10945.50 IOPS, 42.76 MiB/s [2024-12-11T14:04:25.268Z] 10960.67 IOPS, 42.82 MiB/s [2024-12-11T14:04:25.268Z] [2024-12-11 15:04:18.420867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.420906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:18.420922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.420930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:18.420939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.420946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:18.420954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.420960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:18.420969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.420975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:18.420983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.420989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:18.420997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.421003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 
15:04:18.421012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.421019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:18.421027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.421034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:18.421042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.421048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:18.421056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.421063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:18.421071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.421078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:18.421085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.421092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:18.421102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.421109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:18.421117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.421124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:18.421132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.421138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:18.421146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.421153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:18.421167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.220 [2024-12-11 15:04:18.421174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.220 [2024-12-11 15:04:18.421182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:29 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71976 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:32.221 [2024-12-11 15:04:18.421512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:32.221 [2024-12-11 15:04:18.421615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421761] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.221 [2024-12-11 15:04:18.421783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.221 [2024-12-11 15:04:18.421789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.421797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.421803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.421811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.421817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.421825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.421832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.421839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.421847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.421855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.421861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.421869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.421876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.421884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.421890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.421898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.421904] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.421912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.421918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.421926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.421933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.421940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.421947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.421955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.421961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.421969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.421975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.421983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.421989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.421997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 
[2024-12-11 15:04:18.422208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:32.222 [2024-12-11 15:04:18.422265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.222 [2024-12-11 15:04:18.422294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72536 len:8 PRP1 0x0 PRP2 0x0 00:23:32.222 [2024-12-11 15:04:18.422301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.222 [2024-12-11 15:04:18.422315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.222 [2024-12-11 15:04:18.422321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72544 len:8 PRP1 0x0 PRP2 0x0 00:23:32.222 [2024-12-11 15:04:18.422327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.222 [2024-12-11 15:04:18.422339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.222 [2024-12-11 15:04:18.422344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72552 len:8 PRP1 0x0 PRP2 0x0 00:23:32.222 [2024-12-11 15:04:18.422351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.222 [2024-12-11 15:04:18.422362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.222 [2024-12-11 15:04:18.422368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72560 len:8 PRP1 0x0 PRP2 0x0 00:23:32.222 [2024-12-11 15:04:18.422374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.222 [2024-12-11 15:04:18.422381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72568 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72576 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72584 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72592 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72600 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72608 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:32.223 [2024-12-11 15:04:18.422520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422525] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72616 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72624 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72632 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422595] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72640 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72648 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422642] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72656 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422660] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72664 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72672 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72680 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422729] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72688 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422752] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422758] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72696 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72008 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:23:32.223 [2024-12-11 15:04:18.422805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72016 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72024 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72032 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72040 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422892] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72048 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72056 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.223 [2024-12-11 15:04:18.422938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.223 [2024-12-11 15:04:18.422944] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.223 [2024-12-11 15:04:18.422949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72064 len:8 PRP1 0x0 PRP2 0x0 00:23:32.223 [2024-12-11 15:04:18.422956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:04:18.422964] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.224 [2024-12-11 15:04:18.422968] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.224 [2024-12-11 15:04:18.422974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72072 len:8 PRP1 0x0 PRP2 0x0 00:23:32.224 [2024-12-11 15:04:18.422980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:04:18.422987] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.224 [2024-12-11 15:04:18.422991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.224 [2024-12-11 15:04:18.422997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72080 len:8 PRP1 0x0 PRP2 0x0 00:23:32.224 [2024-12-11 15:04:18.423003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:04:18.423009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.224 [2024-12-11 15:04:18.423014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.224 [2024-12-11 15:04:18.423019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72088 len:8 PRP1 0x0 PRP2 0x0 00:23:32.224 [2024-12-11 15:04:18.423025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:04:18.423032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.224 [2024-12-11 15:04:18.423037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.224 [2024-12-11 15:04:18.423042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72096 len:8 PRP1 0x0 PRP2 0x0 00:23:32.224 [2024-12-11 15:04:18.423048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:04:18.423055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.224 [2024-12-11 15:04:18.423059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.224 [2024-12-11 15:04:18.423065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72104 len:8 PRP1 0x0 PRP2 0x0 00:23:32.224 [2024-12-11 15:04:18.423071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:04:18.423077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.224 [2024-12-11 15:04:18.423082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:23:32.224 [2024-12-11 15:04:18.423088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72112 len:8 PRP1 0x0 PRP2 0x0 00:23:32.224 [2024-12-11 15:04:18.423095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:04:18.423102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:32.224 [2024-12-11 15:04:18.423106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:32.224 [2024-12-11 15:04:18.423112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72120 len:8 PRP1 0x0 PRP2 0x0 00:23:32.224 [2024-12-11 15:04:18.423118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:04:18.423167] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:32.224 [2024-12-11 15:04:18.423190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.224 [2024-12-11 15:04:18.423198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:04:18.423207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.224 [2024-12-11 15:04:18.423214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:04:18.423221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.224 [2024-12-11 15:04:18.423228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:04:18.423235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.224 [2024-12-11 15:04:18.423241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.224 [2024-12-11 15:04:18.423247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:32.224 [2024-12-11 15:04:18.423271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e05fe0 (9): Bad file descriptor 00:23:32.224 [2024-12-11 15:04:18.426120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:32.224 [2024-12-11 15:04:18.575550] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
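The long run of abort/complete pairs above is SPDK draining the I/O still queued on the path being torn down: each request is completed manually with status ABORTED - SQ DELETION (00/08), i.e. generic status code type 0x0, status code 0x08, before bdev_nvme fails over from 10.0.0.2:4422 back to 10.0.0.2:4420 and resets the controller. A quick, illustrative way to gauge how much queued I/O was drained this way is to count that status string in the captured log (try.txt is the capture file the test cats further below; the one-liner itself is not part of failover.sh):

    grep -c 'ABORTED - SQ DELETION (00/08)' try.txt   # queued I/Os completed during the path teardown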
00:23:32.224 10803.30 IOPS, 42.20 MiB/s [2024-12-11T14:04:25.272Z] 10816.09 IOPS, 42.25 MiB/s [2024-12-11T14:04:25.272Z] 10843.17 IOPS, 42.36 MiB/s [2024-12-11T14:04:25.272Z] 10853.15 IOPS, 42.40 MiB/s [2024-12-11T14:04:25.272Z] 10862.00 IOPS, 42.43 MiB/s 00:23:32.224 Latency(us) 00:23:32.224 [2024-12-11T14:04:25.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.224 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:32.224 Verification LBA range: start 0x0 length 0x4000 00:23:32.224 NVMe0n1 : 15.00 10876.31 42.49 888.11 0.00 10857.73 445.22 14930.81 00:23:32.224 [2024-12-11T14:04:25.272Z] =================================================================================================================== 00:23:32.224 [2024-12-11T14:04:25.272Z] Total : 10876.31 42.49 888.11 0.00 10857.73 445.22 14930.81 00:23:32.224 Received shutdown signal, test time was about 15.000000 seconds 00:23:32.224 00:23:32.224 Latency(us) 00:23:32.224 [2024-12-11T14:04:25.272Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.224 [2024-12-11T14:04:25.272Z] =================================================================================================================== 00:23:32.224 [2024-12-11T14:04:25.272Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:32.224 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:32.224 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:32.224 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:32.224 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3211354 00:23:32.224 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:32.224 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3211354 /var/tmp/bdevperf.sock 00:23:32.224 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3211354 ']' 00:23:32.224 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:32.224 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.224 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:32.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
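The per-device row in the 15-second summary above is internally consistent: MiB/s is simply IOPS times the 4096-byte I/O size. A quick check with the numbers as printed:

    awk 'BEGIN { printf "%.2f MiB/s\n", 10876.31 * 4096 / (1024 * 1024) }'   # matches the 42.49 MiB/s column

The bdevperf instance relaunched at the end of the block above with -z starts idle on /var/tmp/bdevperf.sock, so the script can add listeners and attach the multipath controller over RPC before driving I/O with bdevperf.py perform_tests.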
00:23:32.224 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.224 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:32.224 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.224 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:32.224 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:32.224 [2024-12-11 15:04:24.901943] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:32.224 15:04:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:32.224 [2024-12-11 15:04:25.102509] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:32.224 15:04:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:32.482 NVMe0n1 00:23:32.482 15:04:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:32.740 00:23:32.740 15:04:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:33.306 00:23:33.306 15:04:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:33.306 15:04:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:33.306 15:04:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:33.564 15:04:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:36.844 15:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:36.845 15:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:36.845 15:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3212080 00:23:36.845 15:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:36.845 15:04:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3212080 00:23:37.779 { 00:23:37.779 "results": [ 00:23:37.779 { 00:23:37.779 "job": "NVMe0n1", 00:23:37.779 "core_mask": 
"0x1", 00:23:37.779 "workload": "verify", 00:23:37.779 "status": "finished", 00:23:37.779 "verify_range": { 00:23:37.779 "start": 0, 00:23:37.779 "length": 16384 00:23:37.779 }, 00:23:37.779 "queue_depth": 128, 00:23:37.779 "io_size": 4096, 00:23:37.779 "runtime": 1.008655, 00:23:37.779 "iops": 11013.676628777926, 00:23:37.779 "mibps": 43.022174331163775, 00:23:37.779 "io_failed": 0, 00:23:37.779 "io_timeout": 0, 00:23:37.779 "avg_latency_us": 11562.62805230385, 00:23:37.779 "min_latency_us": 2393.488695652174, 00:23:37.779 "max_latency_us": 9972.869565217392 00:23:37.779 } 00:23:37.779 ], 00:23:37.779 "core_count": 1 00:23:37.779 } 00:23:37.779 15:04:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:23:37.779 [2024-12-11 15:04:24.514575] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:23:37.779 [2024-12-11 15:04:24.514626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3211354 ] 00:23:37.779 [2024-12-11 15:04:24.591178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.779 [2024-12-11 15:04:24.628626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.779 [2024-12-11 15:04:26.454167] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:37.779 [2024-12-11 15:04:26.454212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.779 [2024-12-11 15:04:26.454223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.779 [2024-12-11 15:04:26.454232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.779 [2024-12-11 15:04:26.454239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.779 [2024-12-11 15:04:26.454246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.779 [2024-12-11 15:04:26.454253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.779 [2024-12-11 15:04:26.454261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:37.779 [2024-12-11 15:04:26.454268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.779 [2024-12-11 15:04:26.454275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:23:37.780 [2024-12-11 15:04:26.454300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:37.780 [2024-12-11 15:04:26.454314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a0fe0 (9): Bad file descriptor 00:23:37.780 [2024-12-11 15:04:26.506334] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:37.780 Running I/O for 1 seconds... 00:23:37.780 10944.00 IOPS, 42.75 MiB/s 00:23:37.780 Latency(us) 00:23:37.780 [2024-12-11T14:04:30.828Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.780 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:37.780 Verification LBA range: start 0x0 length 0x4000 00:23:37.780 NVMe0n1 : 1.01 11013.68 43.02 0.00 0.00 11562.63 2393.49 9972.87 00:23:37.780 [2024-12-11T14:04:30.828Z] =================================================================================================================== 00:23:37.780 [2024-12-11T14:04:30.828Z] Total : 11013.68 43.02 0.00 0.00 11562.63 2393.49 9972.87 00:23:38.038 15:04:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:38.038 15:04:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:38.038 15:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:38.295 15:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:38.295 15:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:38.553 15:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:38.811 15:04:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:42.093 15:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:42.093 15:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:42.093 15:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3211354 00:23:42.093 15:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3211354 ']' 00:23:42.093 15:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3211354 00:23:42.093 15:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:42.093 15:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.093 15:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3211354 00:23:42.093 15:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:42.093 15:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:42.093 15:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3211354' 00:23:42.093 killing process with pid 3211354 00:23:42.093 15:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3211354 00:23:42.093 15:04:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3211354 00:23:42.093 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:42.093 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:42.351 rmmod nvme_tcp 00:23:42.351 rmmod nvme_fabrics 00:23:42.351 rmmod nvme_keyring 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3208251 ']' 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3208251 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3208251 ']' 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3208251 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3208251 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3208251' 00:23:42.351 killing process with pid 3208251 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3208251 00:23:42.351 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3208251 00:23:42.610 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == 
iso ']' 00:23:42.610 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:42.610 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:42.610 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:23:42.610 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:23:42.610 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:42.610 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:23:42.610 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:42.610 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:42.610 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.610 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.610 15:04:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.613 15:04:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:44.613 00:23:44.613 real 0m38.052s 00:23:44.613 user 2m0.535s 00:23:44.613 sys 0m7.958s 00:23:44.613 15:04:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:44.613 15:04:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:44.613 ************************************ 00:23:44.613 END TEST nvmf_failover 00:23:44.613 ************************************ 00:23:44.878 15:04:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:44.878 15:04:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:44.878 15:04:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:44.878 15:04:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.878 ************************************ 00:23:44.878 START TEST nvmf_host_discovery 00:23:44.878 ************************************ 00:23:44.878 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:44.878 * Looking for test storage... 
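Before nvmf_host_discovery begins, the failover run above tears its environment down. Condensed from the nvmftestfini trace (a readability recap, not the literal nvmf/common.sh source; rpc.py stands in for the full scripts/rpc.py path):

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
    modprobe -v -r nvme-tcp                                   # unload host-side nvme_tcp / nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                                           # stop the nvmf_tgt started for the test (pid 3208251 here)
    iptables-save | grep -v SPDK_NVMF | iptables-restore      # remove only the SPDK_NVMF-tagged firewall rules
    ip -4 addr flush cvl_0_1                                   # clear the initiator-side address; remove_spdk_ns
                                                               # then tears down the target namespace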
00:23:44.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:23:44.878 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:44.878 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:23:44.878 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:44.878 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:44.878 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:44.878 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:44.878 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:44.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.879 --rc genhtml_branch_coverage=1 00:23:44.879 --rc genhtml_function_coverage=1 00:23:44.879 --rc genhtml_legend=1 00:23:44.879 --rc geninfo_all_blocks=1 00:23:44.879 --rc geninfo_unexecuted_blocks=1 00:23:44.879 00:23:44.879 ' 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:44.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.879 --rc genhtml_branch_coverage=1 00:23:44.879 --rc genhtml_function_coverage=1 00:23:44.879 --rc genhtml_legend=1 00:23:44.879 --rc geninfo_all_blocks=1 00:23:44.879 --rc geninfo_unexecuted_blocks=1 00:23:44.879 00:23:44.879 ' 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:44.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.879 --rc genhtml_branch_coverage=1 00:23:44.879 --rc genhtml_function_coverage=1 00:23:44.879 --rc genhtml_legend=1 00:23:44.879 --rc geninfo_all_blocks=1 00:23:44.879 --rc geninfo_unexecuted_blocks=1 00:23:44.879 00:23:44.879 ' 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:44.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.879 --rc genhtml_branch_coverage=1 00:23:44.879 --rc genhtml_function_coverage=1 00:23:44.879 --rc genhtml_legend=1 00:23:44.879 --rc geninfo_all_blocks=1 00:23:44.879 --rc geninfo_unexecuted_blocks=1 00:23:44.879 00:23:44.879 ' 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:44.879 15:04:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:44.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:44.879 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.880 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.880 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.880 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:44.880 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:44.880 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:23:44.880 15:04:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:51.447 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:51.447 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.447 15:04:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:51.447 Found net devices under 0000:86:00.0: cvl_0_0 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:51.447 Found net devices under 0000:86:00.1: cvl_0_1 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:51.447 
15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.447 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:51.448 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.448 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:23:51.448 00:23:51.448 --- 10.0.0.2 ping statistics --- 00:23:51.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.448 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.448 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:51.448 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:23:51.448 00:23:51.448 --- 10.0.0.1 ping statistics --- 00:23:51.448 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.448 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3216519 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3216519 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3216519 ']' 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.448 15:04:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.448 [2024-12-11 15:04:43.840841] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
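nvmftestinit has now split the two detected e810 ports across network namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2/24 for the target side, cvl_0_1 stays in the root namespace with 10.0.0.1/24 for the initiator side, and both directions are verified with ping before nvmf_tgt is started inside the namespace. Condensed from the commands traced above (interface names are what this runner detected and will differ elsewhere; paths and the iptables comment string are abbreviated):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                         # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target namespace -> root namespace
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &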
00:23:51.448 [2024-12-11 15:04:43.840884] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.448 [2024-12-11 15:04:43.921807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.448 [2024-12-11 15:04:43.961276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.448 [2024-12-11 15:04:43.961312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.448 [2024-12-11 15:04:43.961322] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.448 [2024-12-11 15:04:43.961327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.448 [2024-12-11 15:04:43.961332] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.448 [2024-12-11 15:04:43.961845] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.448 [2024-12-11 15:04:44.098366] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.448 [2024-12-11 15:04:44.110549] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.448 null0 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.448 null1 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3216591 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3216591 /tmp/host.sock 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3216591 ']' 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:51.448 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.448 [2024-12-11 15:04:44.185925] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
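At this point two SPDK applications are running: the target inside the namespace (nvmfpid 3216519, default RPC socket) and a second nvmf_tgt that plays the host role (hostpid 3216591) with its own RPC socket at /tmp/host.sock. The target-side bring-up that precedes it, restated as the RPCs the trace issues through the suite's rpc_cmd wrapper (flags copied verbatim from the trace; a recap, not new configuration):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192                      # transport options exactly as in the trace
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  rpc_cmd bdev_null_create null0 1000 512                              # two null bdevs, exported as namespaces later
  rpc_cmd bdev_null_create null1 1000 512
  rpc_cmd bdev_wait_for_examine
  # host side: a separate app with its own RPC socket
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &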
00:23:51.448 [2024-12-11 15:04:44.185965] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3216591 ] 00:23:51.448 [2024-12-11 15:04:44.263587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.448 [2024-12-11 15:04:44.305347] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:51.448 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:51.449 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.707 [2024-12-11 15:04:44.720095] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:51.707 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:23:51.965 15:04:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.965 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:51.966 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:51.966 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:51.966 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:51.966 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:51.966 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:51.966 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:51.966 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:51.966 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.966 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:51.966 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.966 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:51.966 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.966 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:23:51.966 15:04:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:52.532 [2024-12-11 15:04:45.466310] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:52.532 [2024-12-11 15:04:45.466333] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:52.532 [2024-12-11 15:04:45.466346] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:52.532 
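The sequence the trace has been stepping through: start the discovery service on the host, then build the subsystem up on the target one RPC at a time, checking after each step that the host still sees nothing, because the subsystem only becomes visible once it has both a data listener and an allowed host NQN. Restated as the RPCs issued (host-side calls go to /tmp/host.sock, target-side calls to the default socket; recap of the trace):

  # host: enable bdev_nvme debug logging and start following the discovery service on 10.0.0.2:8009
  rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # target: create the subsystem, attach a namespace, then add the listener and allow the host NQN
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

Right after the add_host call the discovery controller reports the new subsystem, the host fetches the log page, and bdev_nvme attaches a controller named nvme0 on 10.0.0.2:4420 — the attach and qpair messages that follow.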
[2024-12-11 15:04:45.552611] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:52.790 [2024-12-11 15:04:45.727538] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:52.790 [2024-12-11 15:04:45.728205] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x12859a0:1 started. 00:23:52.790 [2024-12-11 15:04:45.729580] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:52.790 [2024-12-11 15:04:45.729596] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:52.790 [2024-12-11 15:04:45.735226] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x12859a0 was disconnected and freed. delete nvme_qpair. 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.048 15:04:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:53.048 15:04:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.048 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:53.307 [2024-12-11 15:04:46.129912] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1285d20:1 started. 
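All of the '[[ "" == "" ]]' and '[[ nvme0 == ... ]]' checks above come from a small set of polling helpers whose expansions are visible in the trace. Reconstructed from those expansions (a sketch; the notify_id bookkeeping is inferred from the counts printed in the log):

  get_subsystem_names() {   # controller names as seen by the host
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {         # bdevs created on the host for each attached namespace
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_subsystem_paths() {   # service ports (trsvcid) of every path to controller $1
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
  get_notification_count() {  # notifications newer than the last consumed id
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

With these in mind the assertions in the log read naturally: after null1 is added as a second namespace, get_bdev_list must report 'nvme0n1 nvme0n2' and exactly one new notification must have arrived.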
00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:53.307 [2024-12-11 15:04:46.177644] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1285d20 was disconnected and freed. delete nvme_qpair. 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.307 [2024-12-11 15:04:46.228228] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:53.307 [2024-12-11 15:04:46.229222] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:53.307 [2024-12-11 15:04:46.229241] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.307 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:53.308 15:04:46 
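Adding a second listener to the same subsystem drives the multipath part of the test: the discovery controller raises an AER, the host re-reads the discovery log page, finds 10.0.0.2:4421 as a new path for nvme0, and attaches a second controller to it. The target-side call and the condition the host side then polls for, both taken from the trace (ports written out here; the script compares against $NVMF_PORT and $NVMF_SECOND_PORT):

  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
  # host side keeps polling until both ports show up:
  [[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]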
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.308 [2024-12-11 15:04:46.316499] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:23:53.308 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.566 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:53.566 15:04:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:23:53.566 [2024-12-11 15:04:46.421178] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:23:53.566 [2024-12-11 15:04:46.421212] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:53.566 [2024-12-11 15:04:46.421220] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:53.566 [2024-12-11 15:04:46.421225] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.500 [2024-12-11 15:04:47.448031] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:54.500 [2024-12-11 15:04:47.448054] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:54.500 [2024-12-11 15:04:47.451885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.500 [2024-12-11 15:04:47.451903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.500 [2024-12-11 15:04:47.451912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.500 [2024-12-11 15:04:47.451921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.500 [2024-12-11 15:04:47.451929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.500 [2024-12-11 15:04:47.451936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.500 [2024-12-11 15:04:47.451943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.500 [2024-12-11 15:04:47.451952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.500 [2024-12-11 15:04:47.451958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.500 [2024-12-11 15:04:47.461898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1257970 (9): Bad file descriptor 00:23:54.500 [2024-12-11 15:04:47.471934] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:54.500 [2024-12-11 15:04:47.471947] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:54.500 [2024-12-11 15:04:47.471954] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:54.500 [2024-12-11 15:04:47.471959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:54.500 [2024-12-11 15:04:47.471976] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:54.500 [2024-12-11 15:04:47.472218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.500 [2024-12-11 15:04:47.472233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1257970 with addr=10.0.0.2, port=4420 00:23:54.500 [2024-12-11 15:04:47.472242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:23:54.500 [2024-12-11 15:04:47.472254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1257970 (9): Bad file descriptor 00:23:54.500 [2024-12-11 15:04:47.472272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:54.500 [2024-12-11 15:04:47.472280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:54.500 [2024-12-11 15:04:47.472290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:54.500 [2024-12-11 15:04:47.472297] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:54.500 [2024-12-11 15:04:47.472303] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:54.500 [2024-12-11 15:04:47.472307] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:54.500 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.501 [2024-12-11 15:04:47.482007] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:54.501 [2024-12-11 15:04:47.482018] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:54.501 [2024-12-11 15:04:47.482023] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:54.501 [2024-12-11 15:04:47.482028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:54.501 [2024-12-11 15:04:47.482042] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:54.501 [2024-12-11 15:04:47.482205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.501 [2024-12-11 15:04:47.482218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1257970 with addr=10.0.0.2, port=4420 00:23:54.501 [2024-12-11 15:04:47.482226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:23:54.501 [2024-12-11 15:04:47.482238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1257970 (9): Bad file descriptor 00:23:54.501 [2024-12-11 15:04:47.482248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:54.501 [2024-12-11 15:04:47.482255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:54.501 [2024-12-11 15:04:47.482262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:54.501 [2024-12-11 15:04:47.482269] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:54.501 [2024-12-11 15:04:47.482277] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:54.501 [2024-12-11 15:04:47.482282] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:54.501 [2024-12-11 15:04:47.492073] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:54.501 [2024-12-11 15:04:47.492087] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:54.501 [2024-12-11 15:04:47.492092] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:54.501 [2024-12-11 15:04:47.492096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:54.501 [2024-12-11 15:04:47.492110] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
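The burst of errors here is expected rather than a test failure: the listener on port 4420 was just removed (nvmf_subsystem_remove_listener ... -s 4420 above), so the qpairs on that path are torn down and every reconnect attempt from the host's bdev_nvme gets connect() errno 111 (ECONNREFUSED), because nothing is listening on 10.0.0.2:4420 any more. Only the 4420 path is affected, and the check that follows keeps waiting for get_bdev_list to still report 'nvme0n1 nvme0n2'. The step that triggered it, as a single RPC from the trace:

  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420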
00:23:54.501 [2024-12-11 15:04:47.492361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.501 [2024-12-11 15:04:47.492374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1257970 with addr=10.0.0.2, port=4420 00:23:54.501 [2024-12-11 15:04:47.492382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:23:54.501 [2024-12-11 15:04:47.492393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1257970 (9): Bad file descriptor 00:23:54.501 [2024-12-11 15:04:47.492417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:54.501 [2024-12-11 15:04:47.492424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:54.501 [2024-12-11 15:04:47.492431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:54.501 [2024-12-11 15:04:47.492437] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:54.501 [2024-12-11 15:04:47.492442] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:54.501 [2024-12-11 15:04:47.492446] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:54.501 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.501 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.501 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.501 [2024-12-11 15:04:47.502142] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:54.501 [2024-12-11 15:04:47.502161] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:54.501 [2024-12-11 15:04:47.502166] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:54.501 [2024-12-11 15:04:47.502170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:54.501 [2024-12-11 15:04:47.502184] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:54.501 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.501 [2024-12-11 15:04:47.502409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.501 [2024-12-11 15:04:47.502423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1257970 with addr=10.0.0.2, port=4420 00:23:54.501 [2024-12-11 15:04:47.502430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:23:54.501 [2024-12-11 15:04:47.502441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1257970 (9): Bad file descriptor 00:23:54.501 [2024-12-11 15:04:47.502466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:54.501 [2024-12-11 15:04:47.502474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:54.501 [2024-12-11 15:04:47.502481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:54.501 [2024-12-11 15:04:47.502486] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:54.501 [2024-12-11 15:04:47.502491] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:54.501 [2024-12-11 15:04:47.502495] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:54.501 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.501 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.501 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:54.501 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:54.501 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.501 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.501 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.501 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.501 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.501 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.501 [2024-12-11 15:04:47.512215] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:54.501 [2024-12-11 15:04:47.512228] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:54.501 [2024-12-11 15:04:47.512233] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:23:54.501 [2024-12-11 15:04:47.512237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:54.501 [2024-12-11 15:04:47.512251] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:54.501 [2024-12-11 15:04:47.512458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.501 [2024-12-11 15:04:47.512472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1257970 with addr=10.0.0.2, port=4420 00:23:54.501 [2024-12-11 15:04:47.512479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:23:54.501 [2024-12-11 15:04:47.512491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1257970 (9): Bad file descriptor 00:23:54.501 [2024-12-11 15:04:47.512507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:54.501 [2024-12-11 15:04:47.512514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:54.501 [2024-12-11 15:04:47.512521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:54.501 [2024-12-11 15:04:47.512527] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:54.501 [2024-12-11 15:04:47.512531] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:54.501 [2024-12-11 15:04:47.512535] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:54.501 [2024-12-11 15:04:47.522282] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:54.501 [2024-12-11 15:04:47.522301] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:54.501 [2024-12-11 15:04:47.522306] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:54.501 [2024-12-11 15:04:47.522310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:54.501 [2024-12-11 15:04:47.522324] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:54.501 [2024-12-11 15:04:47.522574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.501 [2024-12-11 15:04:47.522587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1257970 with addr=10.0.0.2, port=4420 00:23:54.501 [2024-12-11 15:04:47.522594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:23:54.501 [2024-12-11 15:04:47.522605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1257970 (9): Bad file descriptor 00:23:54.501 [2024-12-11 15:04:47.522622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:54.501 [2024-12-11 15:04:47.522628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:54.501 [2024-12-11 15:04:47.522635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:54.501 [2024-12-11 15:04:47.522641] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:54.501 [2024-12-11 15:04:47.522645] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:54.501 [2024-12-11 15:04:47.522649] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:54.501 [2024-12-11 15:04:47.532354] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:54.501 [2024-12-11 15:04:47.532364] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:54.501 [2024-12-11 15:04:47.532368] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:54.501 [2024-12-11 15:04:47.532372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:54.502 [2024-12-11 15:04:47.532385] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:54.502 [2024-12-11 15:04:47.532509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.502 [2024-12-11 15:04:47.532520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1257970 with addr=10.0.0.2, port=4420 00:23:54.502 [2024-12-11 15:04:47.532527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1257970 is same with the state(6) to be set 00:23:54.502 [2024-12-11 15:04:47.532537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1257970 (9): Bad file descriptor 00:23:54.502 [2024-12-11 15:04:47.532547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:54.502 [2024-12-11 15:04:47.532553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:54.502 [2024-12-11 15:04:47.532559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:54.502 [2024-12-11 15:04:47.532565] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:23:54.502 [2024-12-11 15:04:47.532569] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:54.502 [2024-12-11 15:04:47.532573] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:54.502 [2024-12-11 15:04:47.533657] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:54.502 [2024-12-11 15:04:47.533671] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:54.502 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.760 15:04:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.760 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.018 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:55.018 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:55.018 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:23:55.018 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:23:55.018 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:55.018 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.018 15:04:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.953 [2024-12-11 15:04:48.824441] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:55.953 [2024-12-11 15:04:48.824458] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:55.953 [2024-12-11 15:04:48.824469] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:55.953 [2024-12-11 15:04:48.912732] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:56.212 [2024-12-11 15:04:49.019505] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:23:56.212 [2024-12-11 15:04:49.020115] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1290e70:1 started. 
00:23:56.212 [2024-12-11 15:04:49.021707] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:56.212 [2024-12-11 15:04:49.021732] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.212 request: 00:23:56.212 { 00:23:56.212 "name": "nvme", 00:23:56.212 "trtype": "tcp", 00:23:56.212 "traddr": "10.0.0.2", 00:23:56.212 "adrfam": "ipv4", 00:23:56.212 "trsvcid": "8009", 00:23:56.212 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:56.212 "wait_for_attach": true, 00:23:56.212 "method": "bdev_nvme_start_discovery", 00:23:56.212 "req_id": 1 00:23:56.212 } 00:23:56.212 Got JSON-RPC error response 00:23:56.212 response: 00:23:56.212 { 00:23:56.212 "code": -17, 00:23:56.212 "message": "File exists" 00:23:56.212 } 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.212 15:04:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.212 [2024-12-11 15:04:49.064881] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1290e70 was disconnected and freed. delete nvme_qpair. 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:56.212 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.213 request: 00:23:56.213 { 00:23:56.213 "name": "nvme_second", 00:23:56.213 "trtype": "tcp", 00:23:56.213 "traddr": "10.0.0.2", 00:23:56.213 "adrfam": "ipv4", 00:23:56.213 "trsvcid": "8009", 00:23:56.213 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:56.213 "wait_for_attach": true, 00:23:56.213 "method": 
"bdev_nvme_start_discovery", 00:23:56.213 "req_id": 1 00:23:56.213 } 00:23:56.213 Got JSON-RPC error response 00:23:56.213 response: 00:23:56.213 { 00:23:56.213 "code": -17, 00:23:56.213 "message": "File exists" 00:23:56.213 } 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:56.213 15:04:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.213 15:04:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.588 [2024-12-11 15:04:50.253193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.588 [2024-12-11 15:04:50.253224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12857a0 with addr=10.0.0.2, port=8010 00:23:57.588 [2024-12-11 15:04:50.253242] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:57.588 [2024-12-11 15:04:50.253249] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:57.588 [2024-12-11 15:04:50.253257] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:58.522 [2024-12-11 15:04:51.255554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.522 [2024-12-11 15:04:51.255579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x126fd20 with addr=10.0.0.2, port=8010 00:23:58.522 [2024-12-11 15:04:51.255591] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:58.522 [2024-12-11 15:04:51.255597] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:58.522 [2024-12-11 15:04:51.255604] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:59.457 [2024-12-11 15:04:52.257785] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:59.457 request: 00:23:59.457 { 00:23:59.457 "name": "nvme_second", 00:23:59.457 "trtype": "tcp", 00:23:59.457 "traddr": "10.0.0.2", 00:23:59.457 "adrfam": "ipv4", 00:23:59.457 "trsvcid": "8010", 00:23:59.457 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:59.457 "wait_for_attach": false, 00:23:59.457 "attach_timeout_ms": 3000, 00:23:59.457 "method": "bdev_nvme_start_discovery", 00:23:59.457 "req_id": 1 00:23:59.457 } 00:23:59.457 Got JSON-RPC error response 00:23:59.457 response: 00:23:59.457 { 00:23:59.457 "code": -110, 00:23:59.457 "message": "Connection timed out" 00:23:59.457 } 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:59.457 15:04:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3216591 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:59.457 rmmod nvme_tcp 00:23:59.457 rmmod nvme_fabrics 00:23:59.457 rmmod nvme_keyring 00:23:59.457 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:59.458 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:23:59.458 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:23:59.458 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3216519 ']' 00:23:59.458 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3216519 00:23:59.458 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3216519 ']' 00:23:59.458 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3216519 00:23:59.458 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:23:59.458 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:59.458 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3216519 00:23:59.458 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:59.458 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:59.458 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3216519' 00:23:59.458 killing process with pid 3216519 00:23:59.458 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3216519 
00:23:59.458 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3216519 00:23:59.717 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:59.717 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:59.717 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:59.717 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:23:59.717 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:23:59.717 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:59.717 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:23:59.717 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:59.717 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:59.717 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.717 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.717 15:04:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.620 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:01.620 00:24:01.620 real 0m16.965s 00:24:01.620 user 0m20.092s 00:24:01.620 sys 0m5.781s 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.879 ************************************ 00:24:01.879 END TEST nvmf_host_discovery 00:24:01.879 ************************************ 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.879 ************************************ 00:24:01.879 START TEST nvmf_host_multipath_status 00:24:01.879 ************************************ 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:01.879 * Looking for test storage... 
00:24:01.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:01.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.879 --rc genhtml_branch_coverage=1 00:24:01.879 --rc genhtml_function_coverage=1 00:24:01.879 --rc genhtml_legend=1 00:24:01.879 --rc geninfo_all_blocks=1 00:24:01.879 --rc geninfo_unexecuted_blocks=1 00:24:01.879 00:24:01.879 ' 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:01.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.879 --rc genhtml_branch_coverage=1 00:24:01.879 --rc genhtml_function_coverage=1 00:24:01.879 --rc genhtml_legend=1 00:24:01.879 --rc geninfo_all_blocks=1 00:24:01.879 --rc geninfo_unexecuted_blocks=1 00:24:01.879 00:24:01.879 ' 00:24:01.879 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:01.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.879 --rc genhtml_branch_coverage=1 00:24:01.879 --rc genhtml_function_coverage=1 00:24:01.879 --rc genhtml_legend=1 00:24:01.879 --rc geninfo_all_blocks=1 00:24:01.879 --rc geninfo_unexecuted_blocks=1 00:24:01.879 00:24:01.880 ' 00:24:01.880 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:01.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.880 --rc genhtml_branch_coverage=1 00:24:01.880 --rc genhtml_function_coverage=1 00:24:01.880 --rc genhtml_legend=1 00:24:01.880 --rc geninfo_all_blocks=1 00:24:01.880 --rc geninfo_unexecuted_blocks=1 00:24:01.880 00:24:01.880 ' 00:24:01.880 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 
00:24:01.880 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:01.880 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.880 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.880 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.880 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.880 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.880 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:02.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:02.144 15:04:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/bpftrace.sh 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:02.144 15:04:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 
00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:08.712 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:08.712 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:08.712 Found net devices under 0000:86:00.0: cvl_0_0 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices 
under 0000:86:00.1: cvl_0_1' 00:24:08.712 Found net devices under 0000:86:00.1: cvl_0_1 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:08.712 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:08.713 15:05:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:08.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:08.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:24:08.713 00:24:08.713 --- 10.0.0.2 ping statistics --- 00:24:08.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.713 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:08.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:08.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:24:08.713 00:24:08.713 --- 10.0.0.1 ping statistics --- 00:24:08.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:08.713 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3221607 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3221607 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3221607 ']' 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.713 15:05:00 
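Because the run uses physical NICs rather than virtual ones (the phy != virt check above), nvmf_tcp_init isolates the target port in its own network namespace so initiator and target traffic actually crosses the wire between the two functions of the adapter. The commands traced above reduce to roughly the following; interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are taken directly from the log (the harness also tags the iptables rule with an SPDK_NVMF comment):

    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator port stays in the root namespace
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP reach the initiator-side port
    ping -c 1 10.0.0.2                                             # root namespace -> target
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1     # target namespace -> initiator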
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.713 15:05:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:08.713 [2024-12-11 15:05:00.904544] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:24:08.713 [2024-12-11 15:05:00.904597] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:08.713 [2024-12-11 15:05:00.983519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:08.713 [2024-12-11 15:05:01.023041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:08.713 [2024-12-11 15:05:01.023077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:08.713 [2024-12-11 15:05:01.023085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:08.713 [2024-12-11 15:05:01.023092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:08.713 [2024-12-11 15:05:01.023099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:08.713 [2024-12-11 15:05:01.024325] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.713 [2024-12-11 15:05:01.024325] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.713 15:05:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.713 15:05:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:08.713 15:05:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:08.713 15:05:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:08.713 15:05:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:08.713 15:05:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.713 15:05:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3221607 00:24:08.713 15:05:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:08.713 [2024-12-11 15:05:01.342186] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.713 15:05:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:08.713 Malloc0 00:24:08.713 15:05:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a 
-s SPDK00000000000001 -r -m 2 00:24:08.972 15:05:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:08.972 15:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.230 [2024-12-11 15:05:02.175139] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.230 15:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:09.488 [2024-12-11 15:05:02.371588] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:09.488 15:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3221864 00:24:09.488 15:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:09.488 15:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:09.488 15:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3221864 /var/tmp/bdevperf.sock 00:24:09.488 15:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3221864 ']' 00:24:09.488 15:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:09.488 15:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.488 15:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:09.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
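At this point the target side is fully provisioned. Collapsing the RPC calls traced above into one place (paths and arguments are exactly as logged; this is a condensed restatement of the trace, not the harness script itself):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
    ns="ip netns exec cvl_0_0_ns_spdk"
    # start the target inside the namespace, then provision it over /var/tmp/spdk.sock
    $ns /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB backing bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # separate bdevperf app that will host the two NVMe paths under test
    /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &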
00:24:09.488 15:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.488 15:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:09.747 15:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.747 15:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:09.747 15:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:10.005 15:05:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:10.263 Nvme0n1 00:24:10.263 15:05:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:10.521 Nvme0n1 00:24:10.521 15:05:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:10.521 15:05:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:13.049 15:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:13.049 15:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:13.049 15:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:13.049 15:05:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:13.984 15:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:13.984 15:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:13.984 15:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.984 15:05:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:14.242 15:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.242 15:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:14.242 15:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.242 15:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:14.499 15:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:14.499 15:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:14.499 15:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.499 15:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:14.756 15:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.756 15:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:14.756 15:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.756 15:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:15.014 15:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.014 15:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:15.014 15:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.014 15:05:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:15.014 15:05:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.014 15:05:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:15.014 15:05:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.014 15:05:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:15.271 15:05:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:15.272 15:05:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:15.272 15:05:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:24:15.529 15:05:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:15.786 15:05:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:16.718 15:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:16.718 15:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:16.718 15:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.718 15:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.975 15:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.975 15:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:16.975 15:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.975 15:05:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:17.233 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.233 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:17.233 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:17.233 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.490 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.490 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:17.490 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.490 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:17.747 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.747 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:17.747 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.747 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:17.747 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.747 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:17.747 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.747 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:18.004 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.004 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:18.004 15:05:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:18.260 15:05:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:18.517 15:05:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:19.448 15:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:19.449 15:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:19.449 15:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.449 15:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:19.706 15:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.706 15:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:19.706 15:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.706 15:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:19.963 15:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:19.963 15:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:19.963 15:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.963 15:05:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:20.220 15:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.220 15:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:20.220 15:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.220 15:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:20.220 15:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.220 15:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:20.478 15:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.478 15:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:20.478 15:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.478 15:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:20.478 15:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.478 15:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:20.735 15:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.735 15:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:20.735 15:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:20.991 15:05:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:21.249 15:05:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:22.186 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:22.186 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # 
port_status 4420 current true 00:24:22.186 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.186 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:22.443 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.443 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:22.443 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.443 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:22.700 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:22.700 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:22.701 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.701 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:22.701 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.701 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:22.958 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.958 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:22.958 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.958 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:22.958 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.958 15:05:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:23.215 15:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.215 15:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:23.215 15:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.215 15:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:23.473 15:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:23.473 15:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:23.473 15:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:23.730 15:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:23.987 15:05:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:24.917 15:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:24.917 15:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:24.917 15:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.917 15:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:25.174 15:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:25.174 15:05:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:25.174 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.174 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:25.175 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:25.175 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:25.175 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.175 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:25.431 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.431 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:25.431 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.432 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:25.688 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.688 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:25.688 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.688 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:25.946 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:25.946 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:25.946 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.946 15:05:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:26.202 15:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:26.202 15:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:26.202 15:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:26.202 15:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:26.458 15:05:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:27.388 15:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:27.388 15:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:27.388 15:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.388 15:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:27.645 15:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.645 15:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # 
port_status 4421 current true 00:24:27.645 15:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.645 15:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:27.901 15:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.901 15:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:27.901 15:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.901 15:05:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:28.158 15:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.158 15:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:28.158 15:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.158 15:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:28.415 15:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.415 15:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:28.415 15:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.415 15:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:28.672 15:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:28.672 15:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:28.672 15:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.672 15:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:28.930 15:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.930 15:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:28.930 15:05:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:28.930 15:05:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:29.187 15:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:29.445 15:05:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:30.376 15:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:30.376 15:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:30.376 15:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.376 15:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:30.633 15:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.633 15:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:30.633 15:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.633 15:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:30.890 15:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.890 15:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:30.890 15:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.890 15:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:31.147 15:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.147 15:05:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:31.147 15:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:31.147 15:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.405 15:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.405 15:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:31.405 15:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.405 15:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:31.405 15:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.405 15:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:31.405 15:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.405 15:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:31.662 15:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.662 15:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:31.662 15:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:31.921 15:05:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:32.201 15:05:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:33.217 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:33.217 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:33.217 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.217 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:33.474 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:33.474 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:33.475 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.475 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:24:33.475 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.475 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:33.475 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.475 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:33.731 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.731 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:33.731 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.731 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:33.988 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.988 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:33.988 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.988 15:05:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:34.244 15:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.244 15:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:34.244 15:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.244 15:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:34.500 15:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.500 15:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:34.500 15:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:34.757 15:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 
00:24:34.757 15:05:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:36.128 15:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:36.128 15:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:36.128 15:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.128 15:05:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:36.128 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.128 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:36.128 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.128 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:36.385 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.385 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:36.385 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.385 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:36.385 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.642 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:36.642 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.642 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:36.642 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.642 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:36.642 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.642 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:36.900 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.900 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:36.900 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.900 15:05:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:37.157 15:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.157 15:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:37.157 15:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:37.415 15:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:37.672 15:05:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:38.604 15:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:38.604 15:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:38.604 15:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.604 15:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:38.861 15:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.861 15:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:38.861 15:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.861 15:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:38.861 15:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:38.861 15:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:38.861 15:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:38.861 15:05:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:39.118 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.118 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:39.118 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.118 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:39.376 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.376 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:39.376 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.376 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:39.632 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.632 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:39.633 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.633 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:39.889 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:39.889 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3221864 00:24:39.889 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3221864 ']' 00:24:39.889 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3221864 00:24:39.889 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:39.889 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:39.889 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3221864 00:24:39.889 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:39.889 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:39.889 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3221864' 00:24:39.889 killing process with pid 3221864 00:24:39.889 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3221864 00:24:39.889 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- common/autotest_common.sh@978 -- # wait 3221864 00:24:39.889 { 00:24:39.889 "results": [ 00:24:39.889 { 00:24:39.889 "job": "Nvme0n1", 00:24:39.889 "core_mask": "0x4", 00:24:39.889 "workload": "verify", 00:24:39.889 "status": "terminated", 00:24:39.889 "verify_range": { 00:24:39.889 "start": 0, 00:24:39.889 "length": 16384 00:24:39.889 }, 00:24:39.889 "queue_depth": 128, 00:24:39.889 "io_size": 4096, 00:24:39.889 "runtime": 29.108784, 00:24:39.889 "iops": 10431.31860128544, 00:24:39.889 "mibps": 40.74733828627125, 00:24:39.889 "io_failed": 0, 00:24:39.889 "io_timeout": 0, 00:24:39.889 "avg_latency_us": 12251.049438887687, 00:24:39.889 "min_latency_us": 804.9530434782608, 00:24:39.889 "max_latency_us": 3019898.88 00:24:39.889 } 00:24:39.889 ], 00:24:39.889 "core_count": 1 00:24:39.889 } 00:24:40.149 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3221864 00:24:40.149 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:24:40.149 [2024-12-11 15:05:02.445395] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:24:40.149 [2024-12-11 15:05:02.445456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3221864 ] 00:24:40.149 [2024-12-11 15:05:02.518631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.149 [2024-12-11 15:05:02.559682] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.149 Running I/O for 90 seconds... 00:24:40.149 10984.00 IOPS, 42.91 MiB/s [2024-12-11T14:05:33.197Z] 11108.00 IOPS, 43.39 MiB/s [2024-12-11T14:05:33.197Z] 11110.33 IOPS, 43.40 MiB/s [2024-12-11T14:05:33.197Z] 11110.50 IOPS, 43.40 MiB/s [2024-12-11T14:05:33.197Z] 11133.20 IOPS, 43.49 MiB/s [2024-12-11T14:05:33.197Z] 11171.67 IOPS, 43.64 MiB/s [2024-12-11T14:05:33.198Z] 11177.14 IOPS, 43.66 MiB/s [2024-12-11T14:05:33.198Z] 11182.75 IOPS, 43.68 MiB/s [2024-12-11T14:05:33.198Z] 11189.67 IOPS, 43.71 MiB/s [2024-12-11T14:05:33.198Z] 11184.40 IOPS, 43.69 MiB/s [2024-12-11T14:05:33.198Z] 11196.73 IOPS, 43.74 MiB/s [2024-12-11T14:05:33.198Z] 11190.58 IOPS, 43.71 MiB/s [2024-12-11T14:05:33.198Z] [2024-12-11 15:05:16.572368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:106464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:106472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572487] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:106488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:106496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:106504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:106512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:106528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:106544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:106552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:106560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 
sqhd:000c p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:106576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:106584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:106592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:106600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:106608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:106640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572866] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.572878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:106648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.572885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.573187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:106656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.573200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.573215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:106664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.573222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.573236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.573243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.573257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:106680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.573264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.573277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:106688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.573284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.573297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:106696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.573305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.573319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:106704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.573326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.573340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:106712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 15:05:16.573346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.573360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.150 [2024-12-11 
15:05:16.573367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.573381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.150 [2024-12-11 15:05:16.573388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.573405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.150 [2024-12-11 15:05:16.573412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.573426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:106104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.150 [2024-12-11 15:05:16.573432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.573446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.150 [2024-12-11 15:05:16.573453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:40.150 [2024-12-11 15:05:16.573467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:106120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.150 [2024-12-11 15:05:16.573474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.151 [2024-12-11 15:05:16.573494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.151 [2024-12-11 15:05:16.573515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:106728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:106736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106744 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:106752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:106760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:106768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:106784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:106792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:106800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:106808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:106816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:106824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:106832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:106848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:106856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:106864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:106888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.573964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:106896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 
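Each pair of nvme_qpair.c entries in this dump is one submitted command (print_command) followed by its completion (print_completion); the completions all carry ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 3h (path related) with status code 02h, which is what the initiator logs when I/O completes on a path whose listener is in the inaccessible ANA state. When triaging a run like this it can help to collapse the dump into per-status counts; the following is a small sketch over a saved console log (the file name console.log is hypothetical, and the pattern simply matches the completion format shown here).

grep -o 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z ]* ([0-9a-f]*/[0-9a-f]*)' console.log |
    sed 's/.*\*NOTICE\*: //' |
    sort | uniq -c | sort -rn

For this part of the log the output would be dominated by a single line counting the ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions.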
00:24:40.151 [2024-12-11 15:05:16.573984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:106904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.573991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.574089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:106912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.151 [2024-12-11 15:05:16.574098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.574115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.151 [2024-12-11 15:05:16.574122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.574138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.151 [2024-12-11 15:05:16.574145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.574165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.151 [2024-12-11 15:05:16.574173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.574189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:106168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.151 [2024-12-11 15:05:16.574196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.574211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.151 [2024-12-11 15:05:16.574218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.574233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.151 [2024-12-11 15:05:16.574240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.574256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.151 [2024-12-11 15:05:16.574264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.574280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.151 [2024-12-11 15:05:16.574286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.574302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.151 [2024-12-11 15:05:16.574309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.574324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.151 [2024-12-11 15:05:16.574331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.574346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:106224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.151 [2024-12-11 15:05:16.574353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.574369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:106232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.151 [2024-12-11 15:05:16.574376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.574391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.151 [2024-12-11 15:05:16.574398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:40.151 [2024-12-11 15:05:16.574413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:106272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:106280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574509] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:106304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:106336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:106360 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:106368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:106392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.152 [2024-12-11 15:05:16.574828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:106920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.574850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.574872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:106936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.574895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.574917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:106952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.574939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:106960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.574962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.574978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:106968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.574984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.575071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:106976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.575080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.575099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:106984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.575105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.575123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.575129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.575147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.575155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.575179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.575186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.575203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:107016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.575210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.575228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.575234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.575252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:107032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.575258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 
dnr:0 00:24:40.152 [2024-12-11 15:05:16.575276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:107040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.575283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.575300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:107048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.575307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.575324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:107056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.575331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.575349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:107064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.575356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.575373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.575379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.575397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.152 [2024-12-11 15:05:16.575404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:40.152 [2024-12-11 15:05:16.575421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:107088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.153 [2024-12-11 15:05:16.575428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:16.575445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.153 [2024-12-11 15:05:16.575452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:16.575469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:16.575477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:16.575495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:106408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:16.575502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:16.575519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:16.575526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:16.575543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:16.575551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:16.575569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:106432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:16.575576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:16.575593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:106440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:16.575600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:16.575617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:16.575624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:16.575642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:16.575649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:16.575667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.153 [2024-12-11 15:05:16.575673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:40.153 11102.54 IOPS, 43.37 MiB/s [2024-12-11T14:05:33.201Z] 10309.50 IOPS, 40.27 MiB/s [2024-12-11T14:05:33.201Z] 9622.20 IOPS, 37.59 MiB/s [2024-12-11T14:05:33.201Z] 9091.62 IOPS, 35.51 MiB/s [2024-12-11T14:05:33.201Z] 9212.88 IOPS, 35.99 MiB/s [2024-12-11T14:05:33.201Z] 9333.83 IOPS, 36.46 MiB/s [2024-12-11T14:05:33.201Z] 9487.05 IOPS, 37.06 MiB/s [2024-12-11T14:05:33.201Z] 9678.90 IOPS, 37.81 MiB/s [2024-12-11T14:05:33.201Z] 9854.00 IOPS, 38.49 MiB/s [2024-12-11T14:05:33.201Z] 9932.91 IOPS, 38.80 MiB/s [2024-12-11T14:05:33.201Z] 9985.00 IOPS, 39.00 MiB/s [2024-12-11T14:05:33.201Z] 10035.54 IOPS, 39.20 MiB/s [2024-12-11T14:05:33.201Z] 10159.20 IOPS, 39.68 MiB/s [2024-12-11T14:05:33.201Z] 10275.69 IOPS, 40.14 MiB/s [2024-12-11T14:05:33.201Z] [2024-12-11 15:05:30.440114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:116752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 
[2024-12-11 15:05:30.440154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.440194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:30.440203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.440222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:30.440229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.440242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:30.440249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.440261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:30.440268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.440280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:30.440287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.440299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:30.440306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.440318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:30.440325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.440338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:30.440345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.440358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:117040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:30.440364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.440377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 
lba:116984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:30.440383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.440396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:117016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:30.440402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.440415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:117048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:30.440421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.441219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:117080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.153 [2024-12-11 15:05:30.441236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.441487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.153 [2024-12-11 15:05:30.441502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.441517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:117128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.153 [2024-12-11 15:05:30.441524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.441537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:117144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.153 [2024-12-11 15:05:30.441543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.441556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.153 [2024-12-11 15:05:30.441563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.441575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.153 [2024-12-11 15:05:30.441582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.441594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:117192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.153 [2024-12-11 15:05:30.441601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.441613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:117208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.153 [2024-12-11 15:05:30.441620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.441632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:117224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.153 [2024-12-11 15:05:30.441639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.441652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:117240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.153 [2024-12-11 15:05:30.441658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.441670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.153 [2024-12-11 15:05:30.441677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.441689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:117272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.153 [2024-12-11 15:05:30.441696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.441708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:117288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.153 [2024-12-11 15:05:30.441715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:40.153 [2024-12-11 15:05:30.441727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.153 [2024-12-11 15:05:30.441737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:40.154 [2024-12-11 15:05:30.441750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.154 [2024-12-11 15:05:30.441757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:40.154 [2024-12-11 15:05:30.443515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:117072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.154 [2024-12-11 15:05:30.443536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:40.154 [2024-12-11 15:05:30.443552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:117104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.154 [2024-12-11 15:05:30.443559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 
dnr:0 00:24:40.154 [2024-12-11 15:05:30.443572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:117344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.154 [2024-12-11 15:05:30.443579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:40.154 [2024-12-11 15:05:30.443592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:117360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.154 [2024-12-11 15:05:30.443598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:40.154 [2024-12-11 15:05:30.443611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.154 [2024-12-11 15:05:30.443617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:40.154 [2024-12-11 15:05:30.443630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:117392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.154 [2024-12-11 15:05:30.443637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:40.154 [2024-12-11 15:05:30.443649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.154 [2024-12-11 15:05:30.443656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:40.154 [2024-12-11 15:05:30.443668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:117424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.154 [2024-12-11 15:05:30.443675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:40.154 [2024-12-11 15:05:30.443687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.154 [2024-12-11 15:05:30.443694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:40.154 [2024-12-11 15:05:30.443706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.154 [2024-12-11 15:05:30.443713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:40.154 [2024-12-11 15:05:30.443725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:117472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.154 [2024-12-11 15:05:30.443735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.154 [2024-12-11 15:05:30.443748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:117488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.154 [2024-12-11 15:05:30.443755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.154 [2024-12-11 15:05:30.443767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:40.154 [2024-12-11 15:05:30.443774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:40.154 10361.59 IOPS, 40.47 MiB/s [2024-12-11T14:05:33.202Z] 10397.64 IOPS, 40.62 MiB/s [2024-12-11T14:05:33.202Z] 10430.03 IOPS, 40.74 MiB/s [2024-12-11T14:05:33.202Z] Received shutdown signal, test time was about 29.109419 seconds 00:24:40.154 00:24:40.154 Latency(us) 00:24:40.154 [2024-12-11T14:05:33.202Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.154 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:40.154 Verification LBA range: start 0x0 length 0x4000 00:24:40.154 Nvme0n1 : 29.11 10431.32 40.75 0.00 0.00 12251.05 804.95 3019898.88 00:24:40.154 [2024-12-11T14:05:33.202Z] =================================================================================================================== 00:24:40.154 [2024-12-11T14:05:33.202Z] Total : 10431.32 40.75 0.00 0.00 12251.05 804.95 3019898.88 00:24:40.154 15:05:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:40.154 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:40.154 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:24:40.154 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:40.154 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:40.154 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:24:40.154 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:40.154 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:24:40.154 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:40.154 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:40.412 rmmod nvme_tcp 00:24:40.412 rmmod nvme_fabrics 00:24:40.412 rmmod nvme_keyring 00:24:40.412 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:40.412 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:24:40.412 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:24:40.412 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3221607 ']' 00:24:40.412 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3221607 00:24:40.412 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3221607 ']' 00:24:40.412 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3221607 00:24:40.412 15:05:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:40.412 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.412 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3221607 00:24:40.412 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:40.412 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:40.412 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3221607' 00:24:40.412 killing process with pid 3221607 00:24:40.412 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3221607 00:24:40.412 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3221607 00:24:40.671 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:40.671 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:40.671 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:40.671 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:24:40.671 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:24:40.671 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:40.671 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:24:40.671 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:40.671 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:40.671 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.671 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.671 15:05:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.576 15:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:42.576 00:24:42.576 real 0m40.820s 00:24:42.576 user 1m50.990s 00:24:42.576 sys 0m11.652s 00:24:42.576 15:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:42.576 15:05:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:42.576 ************************************ 00:24:42.576 END TEST nvmf_host_multipath_status 00:24:42.576 ************************************ 00:24:42.576 15:05:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:42.576 15:05:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:42.576 15:05:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:42.576 15:05:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 
-- # set +x 00:24:42.836 ************************************ 00:24:42.836 START TEST nvmf_discovery_remove_ifc 00:24:42.836 ************************************ 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:42.836 * Looking for test storage... 00:24:42.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:42.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.836 --rc genhtml_branch_coverage=1 00:24:42.836 --rc genhtml_function_coverage=1 00:24:42.836 --rc genhtml_legend=1 00:24:42.836 --rc geninfo_all_blocks=1 00:24:42.836 --rc geninfo_unexecuted_blocks=1 00:24:42.836 00:24:42.836 ' 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:42.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.836 --rc genhtml_branch_coverage=1 00:24:42.836 --rc genhtml_function_coverage=1 00:24:42.836 --rc genhtml_legend=1 00:24:42.836 --rc geninfo_all_blocks=1 00:24:42.836 --rc geninfo_unexecuted_blocks=1 00:24:42.836 00:24:42.836 ' 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:42.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.836 --rc genhtml_branch_coverage=1 00:24:42.836 --rc genhtml_function_coverage=1 00:24:42.836 --rc genhtml_legend=1 00:24:42.836 --rc geninfo_all_blocks=1 00:24:42.836 --rc geninfo_unexecuted_blocks=1 00:24:42.836 00:24:42.836 ' 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:42.836 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.836 --rc genhtml_branch_coverage=1 00:24:42.836 --rc genhtml_function_coverage=1 00:24:42.836 --rc genhtml_legend=1 00:24:42.836 --rc geninfo_all_blocks=1 00:24:42.836 --rc geninfo_unexecuted_blocks=1 00:24:42.836 00:24:42.836 ' 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:24:42.836 
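The trace above walks through the lcov version gate: scripts/common.sh splits "1.15" and "2" on ".-:" and compares them component by component before choosing the coverage rc options. The helper below is a minimal sketch of that comparison, reconstructed from the xtrace only; the name cmp_versions_sketch and the simplified error handling are mine, not the verbatim scripts/common.sh code (which also validates each component with a decimal check).

#!/usr/bin/env bash
# Sketch only: simplified reconstruction of the version comparison traced above.
# Assumes purely numeric components; the real helper rejects non-digits first.
cmp_versions_sketch() {
    local op=$2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}
        # Force base-10 so components like "08" do not trip octal parsing.
        if (( 10#$a > 10#$b )); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if (( 10#$a < 10#$b )); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    # Every component matched.
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}

# Mirrors the "lt 1.15 2" decision in the trace: an lcov older than 2.x
# gets the branch/function coverage rc options.
if cmp_versions_sketch "1.15" '<' 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi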
15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:24:42.836 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:42.837 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:24:42.837 15:05:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:24:49.407 15:05:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:49.407 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:49.407 15:05:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:49.407 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.407 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:49.408 Found net devices under 0000:86:00.0: cvl_0_0 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:49.408 Found net devices under 0000:86:00.1: cvl_0_1 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:49.408 
15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:49.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:49.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:24:49.408 00:24:49.408 --- 10.0.0.2 ping statistics --- 00:24:49.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.408 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:49.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:24:49.408 00:24:49.408 --- 10.0.0.1 ping statistics --- 00:24:49.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.408 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3230621 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3230621 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3230621 ']' 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:49.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:49.408 15:05:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.408 [2024-12-11 15:05:41.827279] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:24:49.408 [2024-12-11 15:05:41.827329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.408 [2024-12-11 15:05:41.907475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.408 [2024-12-11 15:05:41.947306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.408 [2024-12-11 15:05:41.947340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.408 [2024-12-11 15:05:41.947348] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.408 [2024-12-11 15:05:41.947354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.408 [2024-12-11 15:05:41.947359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:49.408 [2024-12-11 15:05:41.947905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.408 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:49.408 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:49.408 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:49.408 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:49.408 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.408 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.408 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:49.408 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.408 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.408 [2024-12-11 15:05:42.092673] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:49.408 [2024-12-11 15:05:42.100854] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:49.408 null0 00:24:49.408 [2024-12-11 15:05:42.132840] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.408 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.408 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3230645 00:24:49.408 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -m 0x1 -r 
/tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:49.408 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3230645 /tmp/host.sock 00:24:49.408 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3230645 ']' 00:24:49.408 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:49.409 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:49.409 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:49.409 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:49.409 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:49.409 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.409 [2024-12-11 15:05:42.200102] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:24:49.409 [2024-12-11 15:05:42.200147] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3230645 ] 00:24:49.409 [2024-12-11 15:05:42.275323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.409 [2024-12-11 15:05:42.315296] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:49.409 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:49.409 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:24:49.409 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:49.409 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:49.409 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.409 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.409 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.409 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:49.409 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.409 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.409 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.409 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:49.409 15:05:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.409 15:05:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.784 [2024-12-11 15:05:43.499324] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:50.784 [2024-12-11 15:05:43.499348] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:50.784 [2024-12-11 15:05:43.499362] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:50.784 [2024-12-11 15:05:43.587622] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:50.784 [2024-12-11 15:05:43.689395] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:50.784 [2024-12-11 15:05:43.690204] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1574800:1 started. 00:24:50.784 [2024-12-11 15:05:43.691532] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:50.784 [2024-12-11 15:05:43.691573] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:50.784 [2024-12-11 15:05:43.691593] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:50.784 [2024-12-11 15:05:43.691607] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:50.784 [2024-12-11 15:05:43.691627] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:50.784 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.784 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:50.784 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:50.784 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.784 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:50.784 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.784 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:50.784 [2024-12-11 15:05:43.698081] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1574800 was disconnected and freed. delete nvme_qpair. 
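Editor's note: the trace above enters the test's wait_for_bdev / get_bdev_list polling helpers, which query the host-side nvmf_tgt over its /tmp/host.sock RPC socket until the bdev list matches the expected value (nvme0n1 after attach, an empty string after the interface is removed). The following is only a minimal sketch of what those helpers appear to do, reconstructed from the xtrace lines; the real implementations live in test/nvmf/host/discovery_remove_ifc.sh and the rpc_cmd wrapper, so the use of scripts/rpc.py and the absence of an iteration cap here are assumptions.

```bash
#!/usr/bin/env bash
# Sketch only: approximates the helpers visible in the xtrace output above.
# Assumes scripts/rpc.py sits behind rpc_cmd; the real helpers may differ.

HOST_SOCK=/tmp/host.sock            # RPC socket of the host-side nvmf_tgt

get_bdev_list() {
    # Return all attached bdev names as one sorted, space-separated string.
    scripts/rpc.py -s "$HOST_SOCK" bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the bdev list equals the expected string
    # ("nvme0n1" while attached, "" once the controller has been torn down).
    local expected="$1"
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}

# Example use, mirroring the trace:
#   wait_for_bdev nvme0n1   # wait for discovery to attach the namespace
#   wait_for_bdev ''        # wait for it to vanish after the ifc is removed
```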
00:24:50.784 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.784 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:50.784 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.784 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:50.784 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:50.784 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:51.043 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:51.043 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:51.043 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.043 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:51.043 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.043 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:51.043 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.043 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:51.043 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.043 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:51.043 15:05:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:51.978 15:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:51.978 15:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.978 15:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:51.978 15:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.978 15:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:51.978 15:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.978 15:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:51.978 15:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.978 15:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:51.978 15:05:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:52.916 15:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:52.916 15:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.916 15:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:52.916 15:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.916 15:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:52.916 15:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.916 15:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:53.174 15:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.174 15:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:53.174 15:05:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:54.108 15:05:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:54.108 15:05:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.108 15:05:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:54.108 15:05:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.108 15:05:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:54.108 15:05:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.108 15:05:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:54.108 15:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.108 15:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:54.108 15:05:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:55.043 15:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:55.043 15:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.043 15:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:55.043 15:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.043 15:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:55.043 15:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:55.043 15:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:55.043 15:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.043 15:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:55.043 15:05:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:56.418 15:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:24:56.418 15:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.418 15:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:56.418 15:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.418 15:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:56.418 15:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:56.418 15:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:56.418 15:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.418 15:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:56.418 15:05:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:56.418 [2024-12-11 15:05:49.143073] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:56.418 [2024-12-11 15:05:49.143115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.418 [2024-12-11 15:05:49.143130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.418 [2024-12-11 15:05:49.143142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.418 [2024-12-11 15:05:49.143151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.418 [2024-12-11 15:05:49.143166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.418 [2024-12-11 15:05:49.143175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.418 [2024-12-11 15:05:49.143186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.418 [2024-12-11 15:05:49.143195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.418 [2024-12-11 15:05:49.143207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.418 [2024-12-11 15:05:49.143217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.418 [2024-12-11 15:05:49.143226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550fe0 is same with the state(6) to be set 00:24:56.418 [2024-12-11 15:05:49.153095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1550fe0 (9): Bad file descriptor 00:24:56.418 [2024-12-11 15:05:49.163134] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
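Editor's note: the "Connection timed out" / "Delete qpairs for reset" sequence above is bounded by the reconnect options the test passed to bdev_nvme_start_discovery earlier in the trace (--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1). A hedged sketch of that RPC as it could be issued by hand; the socket path, NQN, and addresses are taken from the trace, and scripts/rpc.py is assumed to be the transport behind rpc_cmd.

```bash
#!/usr/bin/env bash
# Sketch: the discovery-attach RPC from the trace, issued directly with
# scripts/rpc.py (an assumption; the test goes through its rpc_cmd wrapper).

scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 \
    --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 \
    --wait-for-attach

# --reconnect-delay-sec: wait between reconnect attempts once the TCP
#   connection drops (the errno 110 path seen above).
# --ctrlr-loss-timeout-sec: stop retrying the controller after this long;
#   once it is dropped, the nvme0n1 bdev disappears from the list.
# --fast-io-fail-timeout-sec: fail outstanding I/O early instead of
#   holding it for the full loss timeout.
```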
00:24:56.418 [2024-12-11 15:05:49.163147] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:56.418 [2024-12-11 15:05:49.163154] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:56.418 [2024-12-11 15:05:49.163162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:56.418 [2024-12-11 15:05:49.163189] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:57.355 15:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:57.355 15:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.355 15:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:57.355 15:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.355 15:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:57.355 15:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:57.355 15:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:57.355 [2024-12-11 15:05:50.202313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:57.355 [2024-12-11 15:05:50.202403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1550fe0 with addr=10.0.0.2, port=4420 00:24:57.355 [2024-12-11 15:05:50.202447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1550fe0 is same with the state(6) to be set 00:24:57.355 [2024-12-11 15:05:50.202521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1550fe0 (9): Bad file descriptor 00:24:57.355 [2024-12-11 15:05:50.203605] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:24:57.355 [2024-12-11 15:05:50.203687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:57.355 [2024-12-11 15:05:50.203726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:57.355 [2024-12-11 15:05:50.203761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:57.355 [2024-12-11 15:05:50.203791] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:57.355 [2024-12-11 15:05:50.203818] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:57.355 [2024-12-11 15:05:50.203841] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:57.355 [2024-12-11 15:05:50.203875] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
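Editor's note: while the reconnect loop above spins, the controller's status can be inspected out-of-band from the same RPC socket. This is not something the traced test does; it is only an illustration using the standard bdev_nvme_get_controllers RPC, with jq added for readability. The exact field layout of the output depends on the SPDK version.

```bash
#!/usr/bin/env bash
# Illustration only (not part of the traced test): dump the NVMe bdev
# controllers known to the host-side nvmf_tgt while it retries the
# 10.0.0.2:4420 path.

scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq '.'

# The failing controller stays listed while reconnects are attempted and is
# removed once the loss timeout expires, taking its namespace bdev with it.
```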
00:24:57.355 [2024-12-11 15:05:50.203898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:57.355 15:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.355 15:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:57.355 15:05:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:58.290 [2024-12-11 15:05:51.206446] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:58.290 [2024-12-11 15:05:51.206470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:58.290 [2024-12-11 15:05:51.206483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:58.290 [2024-12-11 15:05:51.206493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:58.290 [2024-12-11 15:05:51.206503] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:24:58.290 [2024-12-11 15:05:51.206513] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:58.290 [2024-12-11 15:05:51.206520] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:58.290 [2024-12-11 15:05:51.206526] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:58.290 [2024-12-11 15:05:51.206556] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:58.290 [2024-12-11 15:05:51.206586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.290 [2024-12-11 15:05:51.206600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.290 [2024-12-11 15:05:51.206615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.290 [2024-12-11 15:05:51.206625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.290 [2024-12-11 15:05:51.206637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.290 [2024-12-11 15:05:51.206647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.290 [2024-12-11 15:05:51.206657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.290 [2024-12-11 15:05:51.206671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.290 [2024-12-11 15:05:51.206683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.290 [2024-12-11 15:05:51.206692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.290 [2024-12-11 15:05:51.206701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:24:58.290 [2024-12-11 15:05:51.206884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15402f0 (9): Bad file descriptor 00:24:58.290 [2024-12-11 15:05:51.207895] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:58.290 [2024-12-11 15:05:51.207908] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:24:58.290 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:58.290 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.290 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:58.290 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.290 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:58.290 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:58.290 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:58.290 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.290 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:58.290 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.290 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.548 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:58.548 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:58.548 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.548 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:58.548 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.548 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:58.548 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:58.548 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:58.548 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.548 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:58.548 15:05:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:59.483 15:05:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:59.483 15:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:59.483 15:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:59.483 15:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.483 15:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:59.483 15:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.483 15:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:59.483 15:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.483 15:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:59.483 15:05:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:00.416 [2024-12-11 15:05:53.264644] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:00.416 [2024-12-11 15:05:53.264662] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:00.416 [2024-12-11 15:05:53.264674] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:00.416 [2024-12-11 15:05:53.393065] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:00.416 [2024-12-11 15:05:53.453697] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:25:00.416 [2024-12-11 15:05:53.454350] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x155c2a0:1 started. 
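Editor's note: the second discovery attach above (nvme1, qpair 0x155c2a0) follows the test bringing the target-side interface back inside its network namespace, mirroring the earlier teardown. Below is a compact sketch of that down/up cycle using the exact commands from the xtrace, wrapped in helper functions for clarity; the function names are mine, and the namespace and interface names (cvl_0_0_ns_spdk, cvl_0_0) are specific to this run's e810 setup.

```bash
#!/usr/bin/env bash
# Sketch of the interface cycle driven by the test (commands taken from the
# xtrace above); names are specific to this run.

NS=cvl_0_0_ns_spdk
IFC=cvl_0_0
ADDR=10.0.0.2/24

remove_target_ifc() {
    # Drop the target address and take the link down: the host's TCP
    # connection to 10.0.0.2:4420 then times out (errno 110).
    ip netns exec "$NS" ip addr del "$ADDR" dev "$IFC"
    ip netns exec "$NS" ip link set "$IFC" down
}

restore_target_ifc() {
    # Re-add the address and bring the link up: the discovery service
    # reconnects and attaches the subsystem again (as nvme1 above).
    ip netns exec "$NS" ip addr add "$ADDR" dev "$IFC"
    ip netns exec "$NS" ip link set "$IFC" up
}

# remove_target_ifc;  wait_for_bdev ''         # namespace bdev disappears
# restore_target_ifc; wait_for_bdev nvme1n1    # re-attached under a new name
```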
00:25:00.416 [2024-12-11 15:05:53.455411] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:00.416 [2024-12-11 15:05:53.455445] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:00.416 [2024-12-11 15:05:53.455463] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:00.416 [2024-12-11 15:05:53.455476] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:00.416 [2024-12-11 15:05:53.455484] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:00.416 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:00.416 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:00.416 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:00.416 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.416 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:00.416 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:00.416 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:00.675 [2024-12-11 15:05:53.463288] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x155c2a0 was disconnected and freed. delete nvme_qpair. 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3230645 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3230645 ']' 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3230645 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3230645 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3230645' 00:25:00.675 killing process with pid 3230645 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3230645 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3230645 00:25:00.675 15:05:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.675 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.675 rmmod nvme_tcp 00:25:00.934 rmmod nvme_fabrics 00:25:00.934 rmmod nvme_keyring 00:25:00.934 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:00.934 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:25:00.934 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:25:00.934 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3230621 ']' 00:25:00.934 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3230621 00:25:00.934 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3230621 ']' 00:25:00.934 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3230621 00:25:00.934 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:25:00.934 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.934 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3230621 00:25:00.934 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:00.934 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:00.934 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3230621' 00:25:00.934 killing process with pid 3230621 00:25:00.934 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3230621 00:25:00.934 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3230621 00:25:01.193 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:01.193 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:01.193 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:01.193 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:25:01.193 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:25:01.193 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:01.193 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:25:01.193 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:01.193 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:01.193 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.193 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.193 15:05:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.102 15:05:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:03.102 00:25:03.102 real 0m20.419s 00:25:03.102 user 0m24.584s 00:25:03.102 sys 0m5.862s 00:25:03.102 15:05:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.102 15:05:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:03.102 ************************************ 00:25:03.102 END TEST nvmf_discovery_remove_ifc 00:25:03.102 ************************************ 00:25:03.102 15:05:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:03.102 15:05:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:03.102 15:05:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:03.102 15:05:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.102 ************************************ 00:25:03.102 START TEST nvmf_identify_kernel_target 00:25:03.102 ************************************ 00:25:03.102 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:03.361 * Looking for test storage... 
00:25:03.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:03.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.361 --rc genhtml_branch_coverage=1 00:25:03.361 --rc genhtml_function_coverage=1 00:25:03.361 --rc genhtml_legend=1 00:25:03.361 --rc geninfo_all_blocks=1 00:25:03.361 --rc geninfo_unexecuted_blocks=1 00:25:03.361 00:25:03.361 ' 00:25:03.361 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:03.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.361 --rc genhtml_branch_coverage=1 00:25:03.361 --rc genhtml_function_coverage=1 00:25:03.361 --rc genhtml_legend=1 00:25:03.361 --rc geninfo_all_blocks=1 00:25:03.361 --rc geninfo_unexecuted_blocks=1 00:25:03.361 00:25:03.361 ' 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:03.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.362 --rc genhtml_branch_coverage=1 00:25:03.362 --rc genhtml_function_coverage=1 00:25:03.362 --rc genhtml_legend=1 00:25:03.362 --rc geninfo_all_blocks=1 00:25:03.362 --rc geninfo_unexecuted_blocks=1 00:25:03.362 00:25:03.362 ' 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:03.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.362 --rc genhtml_branch_coverage=1 00:25:03.362 --rc genhtml_function_coverage=1 00:25:03.362 --rc genhtml_legend=1 00:25:03.362 --rc geninfo_all_blocks=1 00:25:03.362 --rc geninfo_unexecuted_blocks=1 00:25:03.362 00:25:03.362 ' 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:25:03.362 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:03.362 15:05:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:09.932 15:06:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:09.932 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:09.932 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:09.932 Found net devices under 0000:86:00.0: cvl_0_0 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:09.932 Found net devices under 0000:86:00.1: cvl_0_1 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:09.932 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:09.933 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:09.933 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:09.933 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:09.933 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:09.933 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:09.933 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:09.933 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:09.933 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:09.933 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:09.933 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:09.933 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:09.933 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:09.933 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:09.933 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:09.933 15:06:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:09.933 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:09.933 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:25:09.933 00:25:09.933 --- 10.0.0.2 ping statistics --- 00:25:09.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.933 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:09.933 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:09.933 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:25:09.933 00:25:09.933 --- 10.0.0.1 ping statistics --- 00:25:09.933 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.933 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:09.933 15:06:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:09.933 15:06:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:25:12.470 Waiting for block devices as requested 00:25:12.470 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:12.470 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:12.470 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:12.470 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:12.729 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:12.729 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:12.729 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:12.729 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:12.988 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:12.988 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:12.988 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:13.248 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:13.248 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:13.248 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:13.248 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:13.507 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:13.507 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:13.507 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:13.507 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:13.507 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:13.507 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:13.507 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:13.507 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
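For reference, the nvmf_tcp_init sequence traced just above boils down to the standalone steps sketched below: one ice port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the other (cvl_0_1) stays in the default namespace as 10.0.0.1, an iptables rule opens TCP/4420, and both directions are ping-checked. This is a distilled sketch of the commands visible in the trace, not the SPDK common.sh implementation itself; the interface names and 10.0.0.x addresses are simply the ones this run used.

# Sketch only: recreate the two-port TCP test path seen in the trace above.
TARGET_IF=cvl_0_0      # port moved into its own netns (10.0.0.2)
INIT_IF=cvl_0_1        # port left in the default netns (10.0.0.1)
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INIT_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INIT_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INIT_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP (4420) arriving on the default-namespace port
ping -c 1 10.0.0.2                       # default netns -> namespaced port
ip netns exec "$NS" ping -c 1 10.0.0.1   # namespaced netns -> default-namespace port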
00:25:13.507 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:13.507 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:13.507 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdk-gpt.py nvme0n1 00:25:13.766 No valid GPT data, bailing 00:25:13.766 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:13.766 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:25:13.766 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:25:13.766 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:13.766 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:13.766 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:13.766 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:13.766 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:13.766 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:13.766 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:25:13.767 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:13.767 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:25:13.767 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:13.767 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:25:13.767 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:25:13.767 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:25:13.767 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:13.767 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:13.767 00:25:13.767 Discovery Log Number of Records 2, Generation counter 2 00:25:13.767 =====Discovery Log Entry 0====== 00:25:13.767 trtype: tcp 00:25:13.767 adrfam: ipv4 00:25:13.767 subtype: current discovery subsystem 00:25:13.767 treq: not specified, sq flow control disable supported 00:25:13.767 portid: 1 00:25:13.767 trsvcid: 4420 00:25:13.767 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:13.767 traddr: 10.0.0.1 00:25:13.767 eflags: none 00:25:13.767 sectype: none 00:25:13.767 =====Discovery Log Entry 1====== 00:25:13.767 trtype: tcp 00:25:13.767 adrfam: ipv4 00:25:13.767 subtype: nvme subsystem 00:25:13.767 treq: not specified, sq flow control disable 
supported 00:25:13.767 portid: 1 00:25:13.767 trsvcid: 4420 00:25:13.767 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:13.767 traddr: 10.0.0.1 00:25:13.767 eflags: none 00:25:13.767 sectype: none 00:25:13.767 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:13.767 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:13.767 ===================================================== 00:25:13.767 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:13.767 ===================================================== 00:25:13.767 Controller Capabilities/Features 00:25:13.767 ================================ 00:25:13.767 Vendor ID: 0000 00:25:13.767 Subsystem Vendor ID: 0000 00:25:13.767 Serial Number: 89e456777e6cdcc6a400 00:25:13.767 Model Number: Linux 00:25:13.767 Firmware Version: 6.8.9-20 00:25:13.767 Recommended Arb Burst: 0 00:25:13.767 IEEE OUI Identifier: 00 00 00 00:25:13.767 Multi-path I/O 00:25:13.767 May have multiple subsystem ports: No 00:25:13.767 May have multiple controllers: No 00:25:13.767 Associated with SR-IOV VF: No 00:25:13.767 Max Data Transfer Size: Unlimited 00:25:13.767 Max Number of Namespaces: 0 00:25:13.767 Max Number of I/O Queues: 1024 00:25:13.767 NVMe Specification Version (VS): 1.3 00:25:13.767 NVMe Specification Version (Identify): 1.3 00:25:13.767 Maximum Queue Entries: 1024 00:25:13.767 Contiguous Queues Required: No 00:25:13.767 Arbitration Mechanisms Supported 00:25:13.767 Weighted Round Robin: Not Supported 00:25:13.767 Vendor Specific: Not Supported 00:25:13.767 Reset Timeout: 7500 ms 00:25:13.767 Doorbell Stride: 4 bytes 00:25:13.767 NVM Subsystem Reset: Not Supported 00:25:13.767 Command Sets Supported 00:25:13.767 NVM Command Set: Supported 00:25:13.767 Boot Partition: Not Supported 00:25:13.767 Memory Page Size Minimum: 4096 bytes 00:25:13.767 Memory Page Size Maximum: 4096 bytes 00:25:13.767 Persistent Memory Region: Not Supported 00:25:13.767 Optional Asynchronous Events Supported 00:25:13.767 Namespace Attribute Notices: Not Supported 00:25:13.767 Firmware Activation Notices: Not Supported 00:25:13.767 ANA Change Notices: Not Supported 00:25:13.767 PLE Aggregate Log Change Notices: Not Supported 00:25:13.767 LBA Status Info Alert Notices: Not Supported 00:25:13.767 EGE Aggregate Log Change Notices: Not Supported 00:25:13.767 Normal NVM Subsystem Shutdown event: Not Supported 00:25:13.767 Zone Descriptor Change Notices: Not Supported 00:25:13.767 Discovery Log Change Notices: Supported 00:25:13.767 Controller Attributes 00:25:13.767 128-bit Host Identifier: Not Supported 00:25:13.767 Non-Operational Permissive Mode: Not Supported 00:25:13.767 NVM Sets: Not Supported 00:25:13.767 Read Recovery Levels: Not Supported 00:25:13.767 Endurance Groups: Not Supported 00:25:13.767 Predictable Latency Mode: Not Supported 00:25:13.767 Traffic Based Keep ALive: Not Supported 00:25:13.767 Namespace Granularity: Not Supported 00:25:13.767 SQ Associations: Not Supported 00:25:13.767 UUID List: Not Supported 00:25:13.767 Multi-Domain Subsystem: Not Supported 00:25:13.767 Fixed Capacity Management: Not Supported 00:25:13.767 Variable Capacity Management: Not Supported 00:25:13.767 Delete Endurance Group: Not Supported 00:25:13.767 Delete NVM Set: Not Supported 00:25:13.767 Extended LBA Formats Supported: Not Supported 00:25:13.767 Flexible Data Placement 
Supported: Not Supported 00:25:13.767 00:25:13.767 Controller Memory Buffer Support 00:25:13.767 ================================ 00:25:13.767 Supported: No 00:25:13.767 00:25:13.767 Persistent Memory Region Support 00:25:13.767 ================================ 00:25:13.767 Supported: No 00:25:13.767 00:25:13.767 Admin Command Set Attributes 00:25:13.767 ============================ 00:25:13.767 Security Send/Receive: Not Supported 00:25:13.767 Format NVM: Not Supported 00:25:13.767 Firmware Activate/Download: Not Supported 00:25:13.767 Namespace Management: Not Supported 00:25:13.767 Device Self-Test: Not Supported 00:25:13.767 Directives: Not Supported 00:25:13.767 NVMe-MI: Not Supported 00:25:13.767 Virtualization Management: Not Supported 00:25:13.767 Doorbell Buffer Config: Not Supported 00:25:13.767 Get LBA Status Capability: Not Supported 00:25:13.767 Command & Feature Lockdown Capability: Not Supported 00:25:13.767 Abort Command Limit: 1 00:25:13.767 Async Event Request Limit: 1 00:25:13.767 Number of Firmware Slots: N/A 00:25:13.767 Firmware Slot 1 Read-Only: N/A 00:25:13.767 Firmware Activation Without Reset: N/A 00:25:13.767 Multiple Update Detection Support: N/A 00:25:13.767 Firmware Update Granularity: No Information Provided 00:25:13.767 Per-Namespace SMART Log: No 00:25:13.767 Asymmetric Namespace Access Log Page: Not Supported 00:25:13.767 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:13.767 Command Effects Log Page: Not Supported 00:25:13.767 Get Log Page Extended Data: Supported 00:25:13.767 Telemetry Log Pages: Not Supported 00:25:13.767 Persistent Event Log Pages: Not Supported 00:25:13.767 Supported Log Pages Log Page: May Support 00:25:13.767 Commands Supported & Effects Log Page: Not Supported 00:25:13.767 Feature Identifiers & Effects Log Page:May Support 00:25:13.767 NVMe-MI Commands & Effects Log Page: May Support 00:25:13.767 Data Area 4 for Telemetry Log: Not Supported 00:25:13.767 Error Log Page Entries Supported: 1 00:25:13.767 Keep Alive: Not Supported 00:25:13.767 00:25:13.767 NVM Command Set Attributes 00:25:13.767 ========================== 00:25:13.767 Submission Queue Entry Size 00:25:13.767 Max: 1 00:25:13.767 Min: 1 00:25:13.767 Completion Queue Entry Size 00:25:13.767 Max: 1 00:25:13.767 Min: 1 00:25:13.767 Number of Namespaces: 0 00:25:13.767 Compare Command: Not Supported 00:25:13.767 Write Uncorrectable Command: Not Supported 00:25:13.767 Dataset Management Command: Not Supported 00:25:13.767 Write Zeroes Command: Not Supported 00:25:13.767 Set Features Save Field: Not Supported 00:25:13.767 Reservations: Not Supported 00:25:13.767 Timestamp: Not Supported 00:25:13.767 Copy: Not Supported 00:25:13.767 Volatile Write Cache: Not Present 00:25:13.767 Atomic Write Unit (Normal): 1 00:25:13.767 Atomic Write Unit (PFail): 1 00:25:13.767 Atomic Compare & Write Unit: 1 00:25:13.767 Fused Compare & Write: Not Supported 00:25:13.767 Scatter-Gather List 00:25:13.767 SGL Command Set: Supported 00:25:13.767 SGL Keyed: Not Supported 00:25:13.767 SGL Bit Bucket Descriptor: Not Supported 00:25:13.767 SGL Metadata Pointer: Not Supported 00:25:13.767 Oversized SGL: Not Supported 00:25:13.767 SGL Metadata Address: Not Supported 00:25:13.767 SGL Offset: Supported 00:25:13.767 Transport SGL Data Block: Not Supported 00:25:13.767 Replay Protected Memory Block: Not Supported 00:25:13.767 00:25:13.767 Firmware Slot Information 00:25:13.767 ========================= 00:25:13.767 Active slot: 0 00:25:13.767 00:25:13.767 00:25:13.767 Error Log 00:25:13.767 
========= 00:25:13.767 00:25:13.767 Active Namespaces 00:25:13.767 ================= 00:25:13.767 Discovery Log Page 00:25:13.767 ================== 00:25:13.767 Generation Counter: 2 00:25:13.767 Number of Records: 2 00:25:13.767 Record Format: 0 00:25:13.767 00:25:13.767 Discovery Log Entry 0 00:25:13.767 ---------------------- 00:25:13.767 Transport Type: 3 (TCP) 00:25:13.767 Address Family: 1 (IPv4) 00:25:13.768 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:13.768 Entry Flags: 00:25:13.768 Duplicate Returned Information: 0 00:25:13.768 Explicit Persistent Connection Support for Discovery: 0 00:25:13.768 Transport Requirements: 00:25:13.768 Secure Channel: Not Specified 00:25:13.768 Port ID: 1 (0x0001) 00:25:13.768 Controller ID: 65535 (0xffff) 00:25:13.768 Admin Max SQ Size: 32 00:25:13.768 Transport Service Identifier: 4420 00:25:13.768 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:13.768 Transport Address: 10.0.0.1 00:25:13.768 Discovery Log Entry 1 00:25:13.768 ---------------------- 00:25:13.768 Transport Type: 3 (TCP) 00:25:13.768 Address Family: 1 (IPv4) 00:25:13.768 Subsystem Type: 2 (NVM Subsystem) 00:25:13.768 Entry Flags: 00:25:13.768 Duplicate Returned Information: 0 00:25:13.768 Explicit Persistent Connection Support for Discovery: 0 00:25:13.768 Transport Requirements: 00:25:13.768 Secure Channel: Not Specified 00:25:13.768 Port ID: 1 (0x0001) 00:25:13.768 Controller ID: 65535 (0xffff) 00:25:13.768 Admin Max SQ Size: 32 00:25:13.768 Transport Service Identifier: 4420 00:25:13.768 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:13.768 Transport Address: 10.0.0.1 00:25:13.768 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:14.027 get_feature(0x01) failed 00:25:14.027 get_feature(0x02) failed 00:25:14.027 get_feature(0x04) failed 00:25:14.027 ===================================================== 00:25:14.027 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:14.027 ===================================================== 00:25:14.027 Controller Capabilities/Features 00:25:14.027 ================================ 00:25:14.027 Vendor ID: 0000 00:25:14.027 Subsystem Vendor ID: 0000 00:25:14.027 Serial Number: 4144827f847c4a03bb0b 00:25:14.027 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:14.027 Firmware Version: 6.8.9-20 00:25:14.027 Recommended Arb Burst: 6 00:25:14.027 IEEE OUI Identifier: 00 00 00 00:25:14.027 Multi-path I/O 00:25:14.027 May have multiple subsystem ports: Yes 00:25:14.027 May have multiple controllers: Yes 00:25:14.027 Associated with SR-IOV VF: No 00:25:14.027 Max Data Transfer Size: Unlimited 00:25:14.028 Max Number of Namespaces: 1024 00:25:14.028 Max Number of I/O Queues: 128 00:25:14.028 NVMe Specification Version (VS): 1.3 00:25:14.028 NVMe Specification Version (Identify): 1.3 00:25:14.028 Maximum Queue Entries: 1024 00:25:14.028 Contiguous Queues Required: No 00:25:14.028 Arbitration Mechanisms Supported 00:25:14.028 Weighted Round Robin: Not Supported 00:25:14.028 Vendor Specific: Not Supported 00:25:14.028 Reset Timeout: 7500 ms 00:25:14.028 Doorbell Stride: 4 bytes 00:25:14.028 NVM Subsystem Reset: Not Supported 00:25:14.028 Command Sets Supported 00:25:14.028 NVM Command Set: Supported 00:25:14.028 Boot Partition: Not Supported 00:25:14.028 
Memory Page Size Minimum: 4096 bytes 00:25:14.028 Memory Page Size Maximum: 4096 bytes 00:25:14.028 Persistent Memory Region: Not Supported 00:25:14.028 Optional Asynchronous Events Supported 00:25:14.028 Namespace Attribute Notices: Supported 00:25:14.028 Firmware Activation Notices: Not Supported 00:25:14.028 ANA Change Notices: Supported 00:25:14.028 PLE Aggregate Log Change Notices: Not Supported 00:25:14.028 LBA Status Info Alert Notices: Not Supported 00:25:14.028 EGE Aggregate Log Change Notices: Not Supported 00:25:14.028 Normal NVM Subsystem Shutdown event: Not Supported 00:25:14.028 Zone Descriptor Change Notices: Not Supported 00:25:14.028 Discovery Log Change Notices: Not Supported 00:25:14.028 Controller Attributes 00:25:14.028 128-bit Host Identifier: Supported 00:25:14.028 Non-Operational Permissive Mode: Not Supported 00:25:14.028 NVM Sets: Not Supported 00:25:14.028 Read Recovery Levels: Not Supported 00:25:14.028 Endurance Groups: Not Supported 00:25:14.028 Predictable Latency Mode: Not Supported 00:25:14.028 Traffic Based Keep ALive: Supported 00:25:14.028 Namespace Granularity: Not Supported 00:25:14.028 SQ Associations: Not Supported 00:25:14.028 UUID List: Not Supported 00:25:14.028 Multi-Domain Subsystem: Not Supported 00:25:14.028 Fixed Capacity Management: Not Supported 00:25:14.028 Variable Capacity Management: Not Supported 00:25:14.028 Delete Endurance Group: Not Supported 00:25:14.028 Delete NVM Set: Not Supported 00:25:14.028 Extended LBA Formats Supported: Not Supported 00:25:14.028 Flexible Data Placement Supported: Not Supported 00:25:14.028 00:25:14.028 Controller Memory Buffer Support 00:25:14.028 ================================ 00:25:14.028 Supported: No 00:25:14.028 00:25:14.028 Persistent Memory Region Support 00:25:14.028 ================================ 00:25:14.028 Supported: No 00:25:14.028 00:25:14.028 Admin Command Set Attributes 00:25:14.028 ============================ 00:25:14.028 Security Send/Receive: Not Supported 00:25:14.028 Format NVM: Not Supported 00:25:14.028 Firmware Activate/Download: Not Supported 00:25:14.028 Namespace Management: Not Supported 00:25:14.028 Device Self-Test: Not Supported 00:25:14.028 Directives: Not Supported 00:25:14.028 NVMe-MI: Not Supported 00:25:14.028 Virtualization Management: Not Supported 00:25:14.028 Doorbell Buffer Config: Not Supported 00:25:14.028 Get LBA Status Capability: Not Supported 00:25:14.028 Command & Feature Lockdown Capability: Not Supported 00:25:14.028 Abort Command Limit: 4 00:25:14.028 Async Event Request Limit: 4 00:25:14.028 Number of Firmware Slots: N/A 00:25:14.028 Firmware Slot 1 Read-Only: N/A 00:25:14.028 Firmware Activation Without Reset: N/A 00:25:14.028 Multiple Update Detection Support: N/A 00:25:14.028 Firmware Update Granularity: No Information Provided 00:25:14.028 Per-Namespace SMART Log: Yes 00:25:14.028 Asymmetric Namespace Access Log Page: Supported 00:25:14.028 ANA Transition Time : 10 sec 00:25:14.028 00:25:14.028 Asymmetric Namespace Access Capabilities 00:25:14.028 ANA Optimized State : Supported 00:25:14.028 ANA Non-Optimized State : Supported 00:25:14.028 ANA Inaccessible State : Supported 00:25:14.028 ANA Persistent Loss State : Supported 00:25:14.028 ANA Change State : Supported 00:25:14.028 ANAGRPID is not changed : No 00:25:14.028 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:14.028 00:25:14.028 ANA Group Identifier Maximum : 128 00:25:14.028 Number of ANA Group Identifiers : 128 00:25:14.028 Max Number of Allowed Namespaces : 1024 00:25:14.028 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:14.028 Command Effects Log Page: Supported 00:25:14.028 Get Log Page Extended Data: Supported 00:25:14.028 Telemetry Log Pages: Not Supported 00:25:14.028 Persistent Event Log Pages: Not Supported 00:25:14.028 Supported Log Pages Log Page: May Support 00:25:14.028 Commands Supported & Effects Log Page: Not Supported 00:25:14.028 Feature Identifiers & Effects Log Page:May Support 00:25:14.028 NVMe-MI Commands & Effects Log Page: May Support 00:25:14.028 Data Area 4 for Telemetry Log: Not Supported 00:25:14.028 Error Log Page Entries Supported: 128 00:25:14.028 Keep Alive: Supported 00:25:14.028 Keep Alive Granularity: 1000 ms 00:25:14.028 00:25:14.028 NVM Command Set Attributes 00:25:14.028 ========================== 00:25:14.028 Submission Queue Entry Size 00:25:14.028 Max: 64 00:25:14.028 Min: 64 00:25:14.028 Completion Queue Entry Size 00:25:14.028 Max: 16 00:25:14.028 Min: 16 00:25:14.028 Number of Namespaces: 1024 00:25:14.028 Compare Command: Not Supported 00:25:14.028 Write Uncorrectable Command: Not Supported 00:25:14.028 Dataset Management Command: Supported 00:25:14.028 Write Zeroes Command: Supported 00:25:14.028 Set Features Save Field: Not Supported 00:25:14.028 Reservations: Not Supported 00:25:14.028 Timestamp: Not Supported 00:25:14.028 Copy: Not Supported 00:25:14.028 Volatile Write Cache: Present 00:25:14.028 Atomic Write Unit (Normal): 1 00:25:14.028 Atomic Write Unit (PFail): 1 00:25:14.028 Atomic Compare & Write Unit: 1 00:25:14.028 Fused Compare & Write: Not Supported 00:25:14.028 Scatter-Gather List 00:25:14.028 SGL Command Set: Supported 00:25:14.028 SGL Keyed: Not Supported 00:25:14.028 SGL Bit Bucket Descriptor: Not Supported 00:25:14.028 SGL Metadata Pointer: Not Supported 00:25:14.028 Oversized SGL: Not Supported 00:25:14.028 SGL Metadata Address: Not Supported 00:25:14.028 SGL Offset: Supported 00:25:14.028 Transport SGL Data Block: Not Supported 00:25:14.028 Replay Protected Memory Block: Not Supported 00:25:14.028 00:25:14.028 Firmware Slot Information 00:25:14.028 ========================= 00:25:14.028 Active slot: 0 00:25:14.028 00:25:14.028 Asymmetric Namespace Access 00:25:14.028 =========================== 00:25:14.028 Change Count : 0 00:25:14.028 Number of ANA Group Descriptors : 1 00:25:14.028 ANA Group Descriptor : 0 00:25:14.028 ANA Group ID : 1 00:25:14.028 Number of NSID Values : 1 00:25:14.028 Change Count : 0 00:25:14.028 ANA State : 1 00:25:14.028 Namespace Identifier : 1 00:25:14.028 00:25:14.028 Commands Supported and Effects 00:25:14.028 ============================== 00:25:14.028 Admin Commands 00:25:14.028 -------------- 00:25:14.028 Get Log Page (02h): Supported 00:25:14.028 Identify (06h): Supported 00:25:14.028 Abort (08h): Supported 00:25:14.028 Set Features (09h): Supported 00:25:14.028 Get Features (0Ah): Supported 00:25:14.028 Asynchronous Event Request (0Ch): Supported 00:25:14.028 Keep Alive (18h): Supported 00:25:14.028 I/O Commands 00:25:14.028 ------------ 00:25:14.028 Flush (00h): Supported 00:25:14.028 Write (01h): Supported LBA-Change 00:25:14.028 Read (02h): Supported 00:25:14.028 Write Zeroes (08h): Supported LBA-Change 00:25:14.028 Dataset Management (09h): Supported 00:25:14.028 00:25:14.028 Error Log 00:25:14.028 ========= 00:25:14.028 Entry: 0 00:25:14.028 Error Count: 0x3 00:25:14.028 Submission Queue Id: 0x0 00:25:14.028 Command Id: 0x5 00:25:14.028 Phase Bit: 0 00:25:14.028 Status Code: 0x2 00:25:14.028 Status Code Type: 0x0 00:25:14.028 Do Not Retry: 1 00:25:14.028 
Error Location: 0x28 00:25:14.028 LBA: 0x0 00:25:14.028 Namespace: 0x0 00:25:14.028 Vendor Log Page: 0x0 00:25:14.028 ----------- 00:25:14.028 Entry: 1 00:25:14.028 Error Count: 0x2 00:25:14.028 Submission Queue Id: 0x0 00:25:14.028 Command Id: 0x5 00:25:14.028 Phase Bit: 0 00:25:14.028 Status Code: 0x2 00:25:14.028 Status Code Type: 0x0 00:25:14.028 Do Not Retry: 1 00:25:14.028 Error Location: 0x28 00:25:14.028 LBA: 0x0 00:25:14.028 Namespace: 0x0 00:25:14.028 Vendor Log Page: 0x0 00:25:14.028 ----------- 00:25:14.028 Entry: 2 00:25:14.028 Error Count: 0x1 00:25:14.028 Submission Queue Id: 0x0 00:25:14.028 Command Id: 0x4 00:25:14.028 Phase Bit: 0 00:25:14.028 Status Code: 0x2 00:25:14.028 Status Code Type: 0x0 00:25:14.028 Do Not Retry: 1 00:25:14.028 Error Location: 0x28 00:25:14.028 LBA: 0x0 00:25:14.028 Namespace: 0x0 00:25:14.028 Vendor Log Page: 0x0 00:25:14.029 00:25:14.029 Number of Queues 00:25:14.029 ================ 00:25:14.029 Number of I/O Submission Queues: 128 00:25:14.029 Number of I/O Completion Queues: 128 00:25:14.029 00:25:14.029 ZNS Specific Controller Data 00:25:14.029 ============================ 00:25:14.029 Zone Append Size Limit: 0 00:25:14.029 00:25:14.029 00:25:14.029 Active Namespaces 00:25:14.029 ================= 00:25:14.029 get_feature(0x05) failed 00:25:14.029 Namespace ID:1 00:25:14.029 Command Set Identifier: NVM (00h) 00:25:14.029 Deallocate: Supported 00:25:14.029 Deallocated/Unwritten Error: Not Supported 00:25:14.029 Deallocated Read Value: Unknown 00:25:14.029 Deallocate in Write Zeroes: Not Supported 00:25:14.029 Deallocated Guard Field: 0xFFFF 00:25:14.029 Flush: Supported 00:25:14.029 Reservation: Not Supported 00:25:14.029 Namespace Sharing Capabilities: Multiple Controllers 00:25:14.029 Size (in LBAs): 1953525168 (931GiB) 00:25:14.029 Capacity (in LBAs): 1953525168 (931GiB) 00:25:14.029 Utilization (in LBAs): 1953525168 (931GiB) 00:25:14.029 UUID: 764b3311-5ba2-45cd-8f3c-912a6d42f45e 00:25:14.029 Thin Provisioning: Not Supported 00:25:14.029 Per-NS Atomic Units: Yes 00:25:14.029 Atomic Boundary Size (Normal): 0 00:25:14.029 Atomic Boundary Size (PFail): 0 00:25:14.029 Atomic Boundary Offset: 0 00:25:14.029 NGUID/EUI64 Never Reused: No 00:25:14.029 ANA group ID: 1 00:25:14.029 Namespace Write Protected: No 00:25:14.029 Number of LBA Formats: 1 00:25:14.029 Current LBA Format: LBA Format #00 00:25:14.029 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:14.029 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:14.029 rmmod nvme_tcp 00:25:14.029 rmmod nvme_fabrics 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:25:14.029 15:06:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.029 15:06:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:16.564 15:06:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:16.564 15:06:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:16.564 15:06:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:16.564 15:06:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:25:16.564 15:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:16.564 15:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:16.564 15:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:16.564 15:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:16.564 15:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:25:16.564 15:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:25:16.564 15:06:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:25:19.096 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:19.096 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:19.096 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:19.096 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:19.096 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:19.096 0000:00:04.2 
(8086 2021): ioatdma -> vfio-pci 00:25:19.096 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:19.096 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:19.096 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:19.096 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:19.096 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:19.096 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:19.096 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:19.096 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:19.096 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:19.096 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:20.034 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:20.034 00:25:20.034 real 0m16.785s 00:25:20.034 user 0m4.255s 00:25:20.034 sys 0m8.817s 00:25:20.034 15:06:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:20.034 15:06:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.034 ************************************ 00:25:20.034 END TEST nvmf_identify_kernel_target 00:25:20.034 ************************************ 00:25:20.034 15:06:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:20.034 15:06:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:20.034 15:06:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:20.034 15:06:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.034 ************************************ 00:25:20.034 START TEST nvmf_auth_host 00:25:20.034 ************************************ 00:25:20.034 15:06:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:20.293 * Looking for test storage... 
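For reference, the configure_kernel_target / clean_kernel_target phases of the identify_kernel_target test that just finished reduce to the nvmet configfs lifecycle sketched below. The mkdir/echo/ln/rm/rmdir commands and their values are taken from the trace above; the redirection targets are not shown in the trace, so the configfs attribute file names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are filled in here from the stock kernel nvmet layout and should be read as an assumption, not a verbatim copy of the script.

# Sketch only: kernel NVMe-oF/TCP target at 10.0.0.1:4420 backed by /dev/nvme0n1,
# exporting nqn.2016-06.io.spdk:testnqn, then torn down again.
NVMET=/sys/kernel/config/nvmet
SUBSYS=$NVMET/subsystems/nqn.2016-06.io.spdk:testnqn
PORT=$NVMET/ports/1

modprobe nvmet                          # nvmet_tcp is pulled in when the tcp port is bound
mkdir "$SUBSYS" "$SUBSYS/namespaces/1" "$PORT"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$SUBSYS/attr_model"        # assumed attribute paths:
echo 1            > "$SUBSYS/attr_allow_any_host"                   # the trace only shows the echoed values
echo /dev/nvme0n1 > "$SUBSYS/namespaces/1/device_path"
echo 1            > "$SUBSYS/namespaces/1/enable"
echo 10.0.0.1     > "$PORT/addr_traddr"
echo tcp          > "$PORT/addr_trtype"
echo 4420         > "$PORT/addr_trsvcid"
echo ipv4         > "$PORT/addr_adrfam"
ln -s "$SUBSYS" "$PORT/subsystems/"

# ... nvme discover / spdk_nvme_identify are then run against 10.0.0.1:4420 ...

echo 0 > "$SUBSYS/namespaces/1/enable"  # teardown, as in clean_kernel_target
rm -f "$PORT/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$SUBSYS/namespaces/1" "$PORT" "$SUBSYS"
modprobe -r nvmet_tcp nvmet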
00:25:20.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:20.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.293 --rc genhtml_branch_coverage=1 00:25:20.293 --rc genhtml_function_coverage=1 00:25:20.293 --rc genhtml_legend=1 00:25:20.293 --rc geninfo_all_blocks=1 00:25:20.293 --rc geninfo_unexecuted_blocks=1 00:25:20.293 00:25:20.293 ' 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:20.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.293 --rc genhtml_branch_coverage=1 00:25:20.293 --rc genhtml_function_coverage=1 00:25:20.293 --rc genhtml_legend=1 00:25:20.293 --rc geninfo_all_blocks=1 00:25:20.293 --rc geninfo_unexecuted_blocks=1 00:25:20.293 00:25:20.293 ' 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:20.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.293 --rc genhtml_branch_coverage=1 00:25:20.293 --rc genhtml_function_coverage=1 00:25:20.293 --rc genhtml_legend=1 00:25:20.293 --rc geninfo_all_blocks=1 00:25:20.293 --rc geninfo_unexecuted_blocks=1 00:25:20.293 00:25:20.293 ' 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:20.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.293 --rc genhtml_branch_coverage=1 00:25:20.293 --rc genhtml_function_coverage=1 00:25:20.293 --rc genhtml_legend=1 00:25:20.293 --rc geninfo_all_blocks=1 00:25:20.293 --rc geninfo_unexecuted_blocks=1 00:25:20.293 00:25:20.293 ' 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.293 15:06:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.293 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:20.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:20.294 15:06:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:26.868 15:06:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:26.868 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:26.868 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.868 
15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.868 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:26.869 Found net devices under 0000:86:00.0: cvl_0_0 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:26.869 Found net devices under 0000:86:00.1: cvl_0_1 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.869 15:06:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:26.869 15:06:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:26.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:26.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:25:26.869 00:25:26.869 --- 10.0.0.2 ping statistics --- 00:25:26.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.869 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:26.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:26.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:25:26.869 00:25:26.869 --- 10.0.0.1 ping statistics --- 00:25:26.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.869 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3243088 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3243088 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3243088 ']' 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
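Up to this point the trace is nvmftestinit on a phy E810 node: both 0x8086:0x159b ports are detected (assumed to be cabled back to back on this rig), cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side interface with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator-side interface with 10.0.0.1/24, TCP port 4420 is opened in iptables, and both directions are verified with a single ping before nvmf_tgt is started inside the namespace. A minimal standalone sketch of that plumbing, assuming the same interface names and addresses as this run (the variable names below are illustrative, not the harness's):

# Sketch of the namespace/IP setup the log above performs, not the harness itself.
TARGET_IF=cvl_0_0          # target-side port, moved into the private namespace
INITIATOR_IF=cvl_0_1       # initiator-side port, stays in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# allow NVMe/TCP (port 4420) in from the initiator-side interface
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# sanity checks, mirroring the two pings in the log
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

After modprobe nvme-tcp, nvmfappstart launches the SPDK application (nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth, pid 3243088 here) inside that namespace via ip netns exec; in this nvmf_auth_host test the SPDK app plays the NVMe-oF host role and later connects out to a kernel nvmet target on 10.0.0.1.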
00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c708c4a92820ed90175c449c6aed09bf 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.HhM 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c708c4a92820ed90175c449c6aed09bf 0 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c708c4a92820ed90175c449c6aed09bf 0 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c708c4a92820ed90175c449c6aed09bf 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.HhM 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.HhM 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.HhM 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.869 15:06:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bc2673ed46cf4ce2a8932d58f0bd16df212bb7ba066d7fa767915076608b0dff 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.aXF 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bc2673ed46cf4ce2a8932d58f0bd16df212bb7ba066d7fa767915076608b0dff 3 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bc2673ed46cf4ce2a8932d58f0bd16df212bb7ba066d7fa767915076608b0dff 3 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bc2673ed46cf4ce2a8932d58f0bd16df212bb7ba066d7fa767915076608b0dff 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.aXF 00:25:26.869 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.aXF 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.aXF 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=885ee8f70407b6b0966487eb783ca049f0513b28e30f28cb 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.7DS 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 885ee8f70407b6b0966487eb783ca049f0513b28e30f28cb 0 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 885ee8f70407b6b0966487eb783ca049f0513b28e30f28cb 0 
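The gen_dhchap_key calls in this stretch build the secrets used for DH-HMAC-CHAP: xxd pulls len/2 random bytes from /dev/urandom as a hex string, the string is stored in a chmod-0600 mktemp file, and a small inline python step wraps it into the DHHC-1 text form. Comparing the raw hex 885ee8f70407b6b0966487eb783ca049f0513b28e30f28cb above with the DHHC-1:00:ODg1ZWU4... string echoed later in the trace, the base64 payload appears to be the ASCII hex key with a little-endian CRC32 appended. A rough, hedged reconstruction of that helper (the function name is illustrative; the hash-id mapping 0=null, 1=sha256, 2=sha384, 3=sha512 is taken from the digests table in the trace):

# Hedged reconstruction of what the harness's format_key python step seems to do.
gen_dhchap_key_sketch() {
    local hashid=$1 hexlen=$2            # hash-id: 0=null 1=sha256 2=sha384 3=sha512
    local key
    key=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)   # hexlen hex characters of randomness
    KEY="$key" HASHID="$hashid" python3 <<'PYEOF'
import os, base64, zlib
key = os.environ["KEY"].encode()
hashid = int(os.environ["HASHID"])
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC32 of the ASCII key, little-endian
print("DHHC-1:{:02x}:{}:".format(hashid, base64.b64encode(key + crc).decode()))
PYEOF
}
# e.g. gen_dhchap_key_sketch 0 48 > /tmp/key && chmod 600 /tmp/key   ->  DHHC-1:00:...==:

The generated secrets land in files such as /tmp/spdk.key-null.HhM, /tmp/spdk.key-sha512.aXF and /tmp/spdk.key-null.7DS, which are exactly what the keyring_file_add_key RPCs register further down in the trace.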
00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=885ee8f70407b6b0966487eb783ca049f0513b28e30f28cb 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.7DS 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.7DS 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.7DS 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0a06b566ea3c28c229f3a4b1dc09019f02fe0358fbd16845 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.7vx 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0a06b566ea3c28c229f3a4b1dc09019f02fe0358fbd16845 2 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0a06b566ea3c28c229f3a4b1dc09019f02fe0358fbd16845 2 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0a06b566ea3c28c229f3a4b1dc09019f02fe0358fbd16845 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.7vx 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.7vx 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.7vx 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.870 15:06:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b809579e5ca902b31212c31deda5ecda 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.lsx 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b809579e5ca902b31212c31deda5ecda 1 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b809579e5ca902b31212c31deda5ecda 1 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b809579e5ca902b31212c31deda5ecda 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.lsx 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.lsx 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.lsx 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8997f9ae895b6dbd37f175e55494242b 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.hcN 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8997f9ae895b6dbd37f175e55494242b 1 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8997f9ae895b6dbd37f175e55494242b 1 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=8997f9ae895b6dbd37f175e55494242b 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.hcN 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.hcN 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.hcN 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=103e67b3d9d9f050a19b18ee956b9acd94d58850d9479b3b 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Ejy 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 103e67b3d9d9f050a19b18ee956b9acd94d58850d9479b3b 2 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 103e67b3d9d9f050a19b18ee956b9acd94d58850d9479b3b 2 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=103e67b3d9d9f050a19b18ee956b9acd94d58850d9479b3b 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Ejy 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Ejy 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Ejy 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:25:26.870 15:06:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7cdaf4fe12122fea325731ece982d360 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.uT0 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7cdaf4fe12122fea325731ece982d360 0 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7cdaf4fe12122fea325731ece982d360 0 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7cdaf4fe12122fea325731ece982d360 00:25:26.870 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.uT0 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.uT0 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.uT0 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7a54858a19a75bd96fe63cde10b8157ce845daba9ef222c10c2c3e3f428bdd13 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Txr 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7a54858a19a75bd96fe63cde10b8157ce845daba9ef222c10c2c3e3f428bdd13 3 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7a54858a19a75bd96fe63cde10b8157ce845daba9ef222c10c2c3e3f428bdd13 3 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7a54858a19a75bd96fe63cde10b8157ce845daba9ef222c10c2c3e3f428bdd13 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:25:26.871 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:25:27.176 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Txr 00:25:27.176 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Txr 00:25:27.176 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Txr 00:25:27.176 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:27.176 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3243088 00:25:27.176 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3243088 ']' 00:25:27.176 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.176 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:27.176 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.176 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:27.176 15:06:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.HhM 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.aXF ]] 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.aXF 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.7DS 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.7vx ]] 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.7vx 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.lsx 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.hcN ]] 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hcN 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.176 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Ejy 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.uT0 ]] 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.uT0 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Txr 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:27.469 15:06:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:27.469 15:06:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:25:30.017 Waiting for block devices as requested 00:25:30.017 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:30.275 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:30.275 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:30.275 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:30.275 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:30.534 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:30.534 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:30.534 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:30.534 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:30.792 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:30.792 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:30.792 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:30.792 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:31.050 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:31.050 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:31.050 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:31.308 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdk-gpt.py nvme0n1 00:25:31.875 No valid GPT data, bailing 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:31.875 15:06:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:31.875 00:25:31.875 Discovery Log Number of Records 2, Generation counter 2 00:25:31.875 =====Discovery Log Entry 0====== 00:25:31.875 trtype: tcp 00:25:31.875 adrfam: ipv4 00:25:31.875 subtype: current discovery subsystem 00:25:31.875 treq: not specified, sq flow control disable supported 00:25:31.875 portid: 1 00:25:31.875 trsvcid: 4420 00:25:31.875 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:31.875 traddr: 10.0.0.1 00:25:31.875 eflags: none 00:25:31.875 sectype: none 00:25:31.875 =====Discovery Log Entry 1====== 00:25:31.875 trtype: tcp 00:25:31.875 adrfam: ipv4 00:25:31.875 subtype: nvme subsystem 00:25:31.875 treq: not specified, sq flow control disable supported 00:25:31.875 portid: 1 00:25:31.875 trsvcid: 4420 00:25:31.875 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:31.875 traddr: 10.0.0.1 00:25:31.875 eflags: none 00:25:31.875 sectype: none 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.875 15:06:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.141 nvme0n1 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: ]] 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.141 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.399 nvme0n1 00:25:32.399 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.399 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.399 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.399 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.399 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.399 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.399 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.399 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.399 15:06:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.399 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.399 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.399 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.399 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:32.399 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.399 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.399 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.399 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.400 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.658 nvme0n1 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: ]] 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.658 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:32.659 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.659 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.659 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.659 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.659 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.659 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.659 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.659 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.659 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.659 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.659 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.659 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.659 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.659 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.659 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:32.659 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.659 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.917 nvme0n1 00:25:32.917 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.917 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.917 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.917 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:32.917 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.917 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.917 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.917 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.917 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.917 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.917 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.917 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.917 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:32.917 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.917 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.917 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.917 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: ]] 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.918 nvme0n1 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.918 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.177 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.177 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.177 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.177 15:06:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.177 nvme0n1 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.177 15:06:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.177 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: ]] 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.435 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.436 nvme0n1 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.436 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.694 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.694 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.694 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:33.694 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:25:33.694 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.694 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.694 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.694 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:33.694 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.695 
15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.695 nvme0n1 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.695 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: ]] 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.954 15:06:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.954 nvme0n1 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: ]] 00:25:33.954 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:34.213 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:34.213 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.213 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.213 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:34.213 15:06:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.213 15:06:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.213 nvme0n1 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.213 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.214 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:34.214 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.214 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.214 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:34.214 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:34.214 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:34.214 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:34.214 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:34.473 15:06:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.473 nvme0n1 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: ]] 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.473 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.732 nvme0n1 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.732 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:34.991 15:06:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.991 15:06:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.250 nvme0n1 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: ]] 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.250 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:35.251 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:35.251 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.251 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.251 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.251 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.251 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.251 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.251 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.251 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.251 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.251 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.251 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.251 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.251 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.251 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:35.251 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.251 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.510 nvme0n1 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: ]] 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.510 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.769 nvme0n1 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.769 15:06:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.769 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.027 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.027 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.027 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.027 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.027 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.027 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.027 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.027 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.027 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.027 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.027 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.027 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.027 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:36.027 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.027 15:06:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.027 nvme0n1 00:25:36.027 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.027 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.027 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.027 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.028 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: ]] 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.286 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.545 nvme0n1 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 
00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.545 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.804 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.804 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.804 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:36.804 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:36.804 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:36.804 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.804 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.804 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:36.804 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.804 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:36.804 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:36.804 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:36.804 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.804 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.804 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.063 nvme0n1 00:25:37.063 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.063 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.063 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.063 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.063 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.063 15:06:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.063 15:06:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: ]] 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.063 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.631 nvme0n1 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: ]] 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.631 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.890 nvme0n1 00:25:37.890 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.890 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.890 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.890 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.890 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.890 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.890 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.890 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.890 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.890 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.148 15:06:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.407 nvme0n1 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: ]] 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.407 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:38.973 nvme0n1 00:25:38.973 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.973 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.973 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.973 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.973 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.973 15:06:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.973 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.973 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.973 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.973 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.232 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.799 nvme0n1 00:25:39.799 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.799 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.799 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.799 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.799 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.799 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.799 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.799 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.799 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.799 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.799 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.799 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.799 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:39.800 
15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: ]] 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.800 15:06:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.367 nvme0n1 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: ]] 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.367 
15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.367 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.934 nvme0n1 00:25:40.934 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.934 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.934 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.934 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.934 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.934 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.192 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.193 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.193 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.193 15:06:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.193 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.760 nvme0n1 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: ]] 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.760 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.019 nvme0n1 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.019 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.020 15:06:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.020 nvme0n1 00:25:42.020 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.020 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.020 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.020 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.020 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.020 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:42.278 15:06:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: ]] 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.278 nvme0n1 00:25:42.278 15:06:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.278 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: ]] 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.535 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.536 nvme0n1 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.536 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.793 nvme0n1 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: ]] 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.793 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:42.794 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:42.794 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:42.794 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.794 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.794 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:42.794 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.794 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:42.794 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:42.794 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:42.794 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:42.794 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.794 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.052 nvme0n1 00:25:43.052 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.052 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.052 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.052 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.052 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.052 15:06:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.052 
15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.052 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.053 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.053 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.053 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.053 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.053 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.053 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.053 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.053 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.053 15:06:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.053 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.053 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.053 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.053 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:43.053 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.053 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.311 nvme0n1 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: ]] 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:43.311 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.312 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.570 nvme0n1 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.570 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: ]] 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.571 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.830 nvme0n1 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:43.830 
15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.830 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.089 nvme0n1 00:25:44.089 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.089 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.089 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.089 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.089 15:06:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.089 
15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: ]] 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.089 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.347 nvme0n1 00:25:44.347 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.347 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.347 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.347 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.347 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.347 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.347 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.348 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.606 15:06:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.606 nvme0n1 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.606 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: ]] 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:44.865 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:44.866 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:44.866 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.866 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.866 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:44.866 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.866 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:44.866 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:44.866 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:44.866 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:44.866 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.866 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.124 nvme0n1 00:25:45.124 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.124 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.124 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.124 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.124 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.124 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.124 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.124 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.124 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.124 15:06:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.124 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.124 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.124 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:45.124 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.124 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.124 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.124 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:45.124 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: ]] 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.125 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.383 nvme0n1 00:25:45.383 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.383 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.383 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.383 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.383 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.383 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.383 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.383 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.383 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.383 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.383 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.383 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.383 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:45.383 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.383 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.383 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:45.383 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.384 15:06:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.384 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.642 nvme0n1 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: ]] 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:45.642 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.901 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:45.901 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:45.901 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:45.901 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.901 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.901 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:45.901 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.901 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:45.901 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:45.901 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:45.901 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.901 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:45.901 15:06:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.160 nvme0n1 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.160 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.727 nvme0n1 00:25:46.727 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.727 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.727 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.727 15:06:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.727 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.727 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.727 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.727 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.727 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.727 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.727 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.727 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.727 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:46.727 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.727 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.727 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.727 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: ]] 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.728 15:06:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.728 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.987 nvme0n1 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.987 15:06:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: ]] 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:46.987 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:46.987 
15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.553 nvme0n1 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:47.553 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.117 nvme0n1 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.117 15:06:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: ]] 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.117 15:06:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.682 nvme0n1 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.682 15:06:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.247 nvme0n1 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: ]] 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:49.247 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.248 
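get_main_ns_ip, traced repeatedly in these frames, boils down to a small lookup: choose which environment variable names the initiator-side address for the transport under test, then print its value (10.0.0.1 in this run). The sketch below is reconstructed from the traced expansions only; the transport variable name and any guard logic outside this excerpt are assumptions:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # TEST_TRANSPORT is assumed to hold "tcp" in this run
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the variable to read
        [[ -z ${!ip} ]] && return 1            # indirect expansion: 10.0.0.1 here
        echo "${!ip}"
    }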
15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.248 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.813 nvme0n1 00:25:49.813 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.813 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.813 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.813 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.813 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.813 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.813 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.813 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.813 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.813 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.070 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.070 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.070 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:50.070 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.070 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.070 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:50.070 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:50.070 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:50.070 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:50.070 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.070 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:50.070 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:50.070 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: ]] 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.071 15:06:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.636 nvme0n1 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.636 15:06:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:50.636 15:06:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.636 15:06:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.202 nvme0n1 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: ]] 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.202 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:51.460 nvme0n1 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.460 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.718 nvme0n1 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:51.718 
15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: ]] 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
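nvmet_auth_set_key is the target-side half of each pass: it selects the DHHC-1 secret and controller secret for the given key index and programs the kernel nvmet host entry with the digest, DH group and keys. The trace only shows the echo statements, not where their output is redirected, so the destinations below are an assumption (the usual nvmet configfs attributes for the allowed host NQN); the assignments mirror what is traced:

    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]} ckey=${ckeys[$keyid]}
        # assumed destination: the nvmet configfs entry for the allowed host NQN
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"
        echo "$dhgroup" > "$host/dhchap_dhgroup"
        echo "$key" > "$host/dhchap_key"
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrlr_key"
    }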
nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.718 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.976 nvme0n1 00:25:51.976 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.976 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.976 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.976 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.976 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: ]] 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.977 
15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.977 15:06:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.977 nvme0n1 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.234 nvme0n1 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.234 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: ]] 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:52.491 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.492 nvme0n1 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.492 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.750 
15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:52.750 15:06:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.750 nvme0n1 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.750 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:53.008 15:06:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: ]] 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.008 15:06:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.008 nvme0n1 00:25:53.008 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.008 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.008 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.008 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.008 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.008 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: ]] 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.266 15:06:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.266 nvme0n1 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.266 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:53.524 
15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.524 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.525 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.525 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.525 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.525 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.525 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:53.525 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.525 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
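Each authenticated connect in this trace follows the same cycle: the target side stages the DH-HMAC-CHAP digest, DH group and key for the allowed host (nvmet_auth_set_key), the host side restricts its own digests/dhgroups via bdev_nvme_set_options, attaches a controller with the matching --dhchap-key/--dhchap-ctrlr-key keyring names, confirms it with bdev_nvme_get_controllers, and detaches it again before the next keyid. The sketch below reconstructs one such cycle from the RPC calls visible above; it is illustrative only, not the project's auth.sh. The nvmet_host_cfs path and the dhchap_* configfs attribute names are assumptions (the trace shows only the echo commands, not their redirect targets), and the keyring entries key0..key4 / ckey0..ckey4 are assumed to have been registered earlier in the test.

# Illustrative reconstruction of one set-key/connect/verify/detach cycle.
# Anything not present in the trace above is marked as an assumption.
digest=sha512
dhgroup=ffdhe3072                 # the trace also iterates ffdhe2048/4096/6144
keyid=1
key="DHHC-1:00:ODg1..."           # truncated here; full DHHC-1 strings appear in the trace
ckey="DHHC-1:02:MGEw..."          # empty for keyid 4, which has no controller key
nvmet_host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

# Target side: stage the parameters for the allowed host
# (attribute names assumed; only the echo commands are visible in the trace).
echo "hmac(${digest})" > "${nvmet_host_cfs}/dhchap_hash"
echo "${dhgroup}"      > "${nvmet_host_cfs}/dhchap_dhgroup"
echo "${key}"          > "${nvmet_host_cfs}/dhchap_key"
[[ -n "${ckey}" ]] && echo "${ckey}" > "${nvmet_host_cfs}/dhchap_ctrl_key"

# Host side: allow only this digest/dhgroup for DH-HMAC-CHAP ...
rpc_cmd bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"

# ... attach with the matching keyring entries (the controller key is passed
# only when a ckey exists for this keyid, exactly as in the trace) ...
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key "key${keyid}" ${ckey:+--dhchap-ctrlr-key "ckey${keyid}"}

# ... verify the authenticated controller came up, then tear it down.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0

Running this cycle for every keyid 0..4 and every ffdhe group exercised above gives the full sha512 matrix; an authentication failure would surface as bdev_nvme_attach_controller returning an error before the bdev_nvme_get_controllers check.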
00:25:53.525 nvme0n1 00:25:53.525 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.525 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.525 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.525 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.525 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.525 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.525 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.525 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.525 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.525 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.782 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.782 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:53.782 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.782 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:53.782 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.782 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.782 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:53.782 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:53.782 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:53.782 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:53.782 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.782 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: ]] 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:53.783 15:06:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.783 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.040 nvme0n1 00:25:54.040 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.041 15:06:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.041 15:06:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.041 15:06:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.299 nvme0n1 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: ]] 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.299 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.558 nvme0n1 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: ]] 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.558 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.816 nvme0n1 00:25:54.816 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.816 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.816 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.816 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.816 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.816 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.816 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.816 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.816 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.816 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.074 15:06:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.332 nvme0n1 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: ]] 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.332 15:06:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.332 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.589 nvme0n1 00:25:55.589 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.589 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.589 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.589 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.589 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.589 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.847 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.848 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:55.848 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.848 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:55.848 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:55.848 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:55.848 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.848 15:06:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.848 15:06:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.105 nvme0n1 00:25:56.105 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: ]] 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.106 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.671 nvme0n1 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: ]] 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:56.671 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.672 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.672 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:56.672 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.672 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:56.672 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:56.672 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:56.672 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:56.672 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.672 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.929 nvme0n1 00:25:56.929 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:56.929 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.929 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.929 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:56.929 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.187 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.187 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.187 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.187 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.187 15:06:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.187 15:06:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.187 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.445 nvme0n1 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcwOGM0YTkyODIwZWQ5MDE3NWM0NDljNmFlZDA5YmYWcoEC: 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: ]] 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmMyNjczZWQ0NmNmNGNlMmE4OTMyZDU4ZjBiZDE2ZGYyMTJiYjdiYTA2NmQ3ZmE3Njc5MTUwNzY2MDhiMGRmZqGbpFQ=: 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:57.445 15:06:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.377 nvme0n1 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:25:58.377 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.378 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.942 nvme0n1 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.942 15:06:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: ]] 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.942 15:06:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.942 15:06:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.508 nvme0n1 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTAzZTY3YjNkOWQ5ZjA1MGExOWIxOGVlOTU2YjlhY2Q5NGQ1ODg1MGQ5NDc5YjNi9vKHGQ==: 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: ]] 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:N2NkYWY0ZmUxMjEyMmZlYTMyNTczMWVjZTk4MmQzNjB+787h: 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:59.508 15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.508 
15:06:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.073 nvme0n1 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2E1NDg1OGExOWE3NWJkOTZmZTYzY2RlMTBiODE1N2NlODQ1ZGFiYTllZjIyMmMxMGMyYzNlM2Y0MjhiZGQxM4DM+4A=: 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:00.073 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:00.074 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.074 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.074 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.074 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:00.074 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.074 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.074 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.074 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.074 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.074 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.074 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.074 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.331 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.331 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.331 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:00.331 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.331 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.897 nvme0n1 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:00.897 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.898 request: 00:26:00.898 { 00:26:00.898 "name": "nvme0", 00:26:00.898 "trtype": "tcp", 00:26:00.898 "traddr": "10.0.0.1", 00:26:00.898 "adrfam": "ipv4", 00:26:00.898 "trsvcid": "4420", 00:26:00.898 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:00.898 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:00.898 "prchk_reftag": false, 00:26:00.898 "prchk_guard": false, 00:26:00.898 "hdgst": false, 00:26:00.898 "ddgst": false, 00:26:00.898 "allow_unrecognized_csi": false, 00:26:00.898 "method": "bdev_nvme_attach_controller", 00:26:00.898 "req_id": 1 00:26:00.898 } 00:26:00.898 Got JSON-RPC error response 00:26:00.898 response: 00:26:00.898 { 00:26:00.898 "code": -5, 00:26:00.898 "message": "Input/output error" 00:26:00.898 } 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.898 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.156 request: 00:26:01.156 { 00:26:01.156 "name": "nvme0", 00:26:01.156 "trtype": "tcp", 00:26:01.156 "traddr": "10.0.0.1", 00:26:01.156 "adrfam": "ipv4", 00:26:01.156 "trsvcid": "4420", 00:26:01.156 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:01.156 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:01.156 "prchk_reftag": false, 00:26:01.156 "prchk_guard": false, 00:26:01.156 "hdgst": false, 00:26:01.156 "ddgst": false, 00:26:01.156 "dhchap_key": "key2", 00:26:01.156 "allow_unrecognized_csi": false, 00:26:01.156 "method": "bdev_nvme_attach_controller", 00:26:01.156 "req_id": 1 00:26:01.156 } 00:26:01.156 Got JSON-RPC error response 00:26:01.156 response: 00:26:01.156 { 00:26:01.156 "code": -5, 00:26:01.156 "message": "Input/output error" 00:26:01.156 } 00:26:01.156 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:01.156 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:01.156 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:01.156 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:01.156 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:01.156 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.156 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:01.156 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.156 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.156 15:06:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
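The failed attach attempts above go through the harness's rpc_cmd helper, which passes its arguments on to SPDK's scripts/rpc.py against the target configured earlier in the run. A rough hand-run equivalent of the key2-only negative check, with every parameter copied from the logged JSON-RPC request (the wrapper details are an assumption of this sketch, not something shown in the trace):

    # Sketch only: same arguments as the logged request. The test wraps this in
    # NOT and expects the JSON-RPC error -5 (Input/output error) shown above,
    # i.e. the controller must NOT attach with this key selection.
    scripts/rpc.py bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2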
00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.156 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.156 request: 00:26:01.156 { 00:26:01.156 "name": "nvme0", 00:26:01.156 "trtype": "tcp", 00:26:01.156 "traddr": "10.0.0.1", 00:26:01.156 "adrfam": "ipv4", 00:26:01.156 "trsvcid": "4420", 00:26:01.156 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:01.156 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:01.156 "prchk_reftag": false, 00:26:01.157 "prchk_guard": false, 00:26:01.157 "hdgst": false, 00:26:01.157 "ddgst": false, 00:26:01.157 "dhchap_key": "key1", 00:26:01.157 "dhchap_ctrlr_key": "ckey2", 00:26:01.157 "allow_unrecognized_csi": false, 00:26:01.157 "method": "bdev_nvme_attach_controller", 00:26:01.157 "req_id": 1 00:26:01.157 } 00:26:01.157 Got JSON-RPC error response 00:26:01.157 response: 00:26:01.157 { 00:26:01.157 "code": -5, 00:26:01.157 "message": "Input/output 
error" 00:26:01.157 } 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.157 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.415 nvme0n1 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: ]] 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.415 request: 00:26:01.415 { 00:26:01.415 "name": "nvme0", 00:26:01.415 "dhchap_key": "key1", 00:26:01.415 "dhchap_ctrlr_key": "ckey2", 00:26:01.415 "method": "bdev_nvme_set_keys", 00:26:01.415 "req_id": 1 00:26:01.415 } 00:26:01.415 Got JSON-RPC error response 00:26:01.415 response: 00:26:01.415 { 00:26:01.415 "code": -13, 00:26:01.415 "message": "Permission denied" 00:26:01.415 } 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:01.415 15:06:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:02.785 15:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:02.785 15:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:02.785 15:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.785 15:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.785 15:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.785 15:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:02.785 15:06:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:03.716 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.716 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:03.716 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.716 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.716 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.716 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:03.716 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:03.716 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODg1ZWU4ZjcwNDA3YjZiMDk2NjQ4N2ViNzgzY2EwNDlmMDUxM2IyOGUzMGYyOGNiHMnlew==: 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: ]] 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MGEwNmI1NjZlYTNjMjhjMjI5ZjNhNGIxZGMwOTAxOWYwMmZlMDM1OGZiZDE2ODQ15djrUQ==: 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.717 nvme0n1 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjgwOTU3OWU1Y2E5MDJiMzEyMTJjMzFkZWRhNWVjZGHSYUkO: 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: ]] 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODk5N2Y5YWU4OTViNmRiZDM3ZjE3NWU1NTQ5NDI0MmIs8q4x: 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.717 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.975 request: 00:26:03.975 { 00:26:03.975 "name": "nvme0", 00:26:03.975 "dhchap_key": "key2", 00:26:03.975 "dhchap_ctrlr_key": "ckey1", 00:26:03.975 "method": "bdev_nvme_set_keys", 00:26:03.975 "req_id": 1 00:26:03.975 } 00:26:03.975 Got JSON-RPC error response 00:26:03.975 response: 00:26:03.975 { 00:26:03.975 "code": -13, 00:26:03.975 "message": "Permission denied" 00:26:03.975 } 00:26:03.975 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:03.975 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:03.975 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:03.975 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:03.975 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:03.975 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:03.975 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:03.975 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:03.975 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:03.975 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:03.975 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:03.975 15:06:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:04.910 15:06:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:04.910 rmmod nvme_tcp 00:26:04.910 rmmod nvme_fabrics 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3243088 ']' 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3243088 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3243088 ']' 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3243088 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:04.910 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3243088 00:26:05.218 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:05.218 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:05.218 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3243088' 00:26:05.218 killing process with pid 3243088 00:26:05.218 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3243088 00:26:05.218 15:06:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3243088 00:26:05.218 15:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:05.218 15:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:05.218 15:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:05.218 15:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:26:05.218 15:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:05.218 15:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:05.218 15:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:05.218 15:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:05.218 15:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:05.218 15:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.218 15:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:26:05.218 15:06:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.172 15:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:07.172 15:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:07.172 15:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:07.172 15:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:07.172 15:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:07.172 15:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:07.172 15:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:07.172 15:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:07.431 15:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:07.431 15:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:07.431 15:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:07.431 15:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:07.431 15:07:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:26:10.723 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:10.723 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:10.723 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:10.723 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:10.723 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:10.723 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:10.723 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:10.723 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:10.723 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:10.723 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:10.723 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:10.723 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:10.723 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:10.723 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:10.723 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:10.723 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:11.292 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:11.292 15:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.HhM /tmp/spdk.key-null.7DS /tmp/spdk.key-sha256.lsx /tmp/spdk.key-sha384.Ejy /tmp/spdk.key-sha512.Txr /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvme-auth.log 00:26:11.292 15:07:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:26:13.837 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:13.837 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:13.837 0000:00:04.6 (8086 2021): Already using the vfio-pci 
driver 00:26:13.837 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:13.837 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:13.837 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:13.837 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:13.837 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:13.837 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:14.097 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:14.097 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:14.097 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:14.097 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:14.097 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:14.097 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:14.097 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:14.097 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:14.097 00:26:14.097 real 0m54.052s 00:26:14.097 user 0m48.712s 00:26:14.097 sys 0m12.640s 00:26:14.097 15:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:14.097 15:07:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.097 ************************************ 00:26:14.097 END TEST nvmf_auth_host 00:26:14.097 ************************************ 00:26:14.097 15:07:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:14.097 15:07:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:14.097 15:07:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:14.097 15:07:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:14.097 15:07:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.097 ************************************ 00:26:14.097 START TEST nvmf_digest 00:26:14.097 ************************************ 00:26:14.097 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:14.356 * Looking for test storage... 
00:26:14.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:14.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.356 --rc genhtml_branch_coverage=1 00:26:14.356 --rc genhtml_function_coverage=1 00:26:14.356 --rc genhtml_legend=1 00:26:14.356 --rc geninfo_all_blocks=1 00:26:14.356 --rc geninfo_unexecuted_blocks=1 00:26:14.356 00:26:14.356 ' 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:14.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.356 --rc genhtml_branch_coverage=1 00:26:14.356 --rc genhtml_function_coverage=1 00:26:14.356 --rc genhtml_legend=1 00:26:14.356 --rc geninfo_all_blocks=1 00:26:14.356 --rc geninfo_unexecuted_blocks=1 00:26:14.356 00:26:14.356 ' 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:14.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.356 --rc genhtml_branch_coverage=1 00:26:14.356 --rc genhtml_function_coverage=1 00:26:14.356 --rc genhtml_legend=1 00:26:14.356 --rc geninfo_all_blocks=1 00:26:14.356 --rc geninfo_unexecuted_blocks=1 00:26:14.356 00:26:14.356 ' 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:14.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.356 --rc genhtml_branch_coverage=1 00:26:14.356 --rc genhtml_function_coverage=1 00:26:14.356 --rc genhtml_legend=1 00:26:14.356 --rc geninfo_all_blocks=1 00:26:14.356 --rc geninfo_unexecuted_blocks=1 00:26:14.356 00:26:14.356 ' 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.356 
15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.356 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:14.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:14.357 15:07:07 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:26:14.357 15:07:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:20.927 
15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:20.927 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:20.927 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:20.927 Found net devices under 0000:86:00.0: cvl_0_0 
00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:20.927 Found net devices under 0000:86:00.1: cvl_0_1 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:20.927 15:07:12 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:20.927 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:20.927 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:26:20.927 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:20.927 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:20.927 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:20.927 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:20.927 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:20.927 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:20.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:20.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:26:20.927 00:26:20.927 --- 10.0.0.2 ping statistics --- 00:26:20.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.927 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:26:20.927 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:20.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:20.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:26:20.927 00:26:20.927 --- 10.0.0.1 ping statistics --- 00:26:20.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.927 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:26:20.927 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.927 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:26:20.927 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:20.927 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.927 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:20.927 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:20.927 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:20.928 ************************************ 00:26:20.928 START TEST nvmf_digest_clean 00:26:20.928 ************************************ 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3256899 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3256899 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3256899 ']' 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:20.928 [2024-12-11 15:07:13.378420] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:26:20.928 [2024-12-11 15:07:13.378466] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.928 [2024-12-11 15:07:13.456884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.928 [2024-12-11 15:07:13.495753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.928 [2024-12-11 15:07:13.495790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.928 [2024-12-11 15:07:13.495800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.928 [2024-12-11 15:07:13.495808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.928 [2024-12-11 15:07:13.495814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
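The nvmf_tcp_init entries above build the test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24 (target side), cvl_0_1 stays in the root namespace with 10.0.0.1/24 (initiator side), TCP port 4420 is opened in iptables, connectivity is verified with ping in both directions, and the NVMe-oF target is then launched inside the namespace with --wait-for-rpc. A condensed sketch of that sequence, using the interface names and workspace path shown in this log:

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
# start the NVMe-oF target inside the namespace; --wait-for-rpc defers framework init
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

The digest runs that follow drive this target from bdevperf over /var/tmp/bperf.sock, attaching the controller with data digest enabled (bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1) and then reading the crc32c accel statistics to confirm which module computed the digests.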
00:26:20.928 [2024-12-11 15:07:13.496444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:20.928 null0 00:26:20.928 [2024-12-11 15:07:13.665691] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.928 [2024-12-11 15:07:13.689907] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3256924 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3256924 /var/tmp/bperf.sock 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3256924 ']' 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:20.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:20.928 [2024-12-11 15:07:13.745084] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:26:20.928 [2024-12-11 15:07:13.745124] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256924 ] 00:26:20.928 [2024-12-11 15:07:13.820085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.928 [2024-12-11 15:07:13.859778] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:20.928 15:07:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:21.186 15:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.186 15:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.444 nvme0n1 00:26:21.444 15:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:21.444 15:07:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:21.704 Running I/O for 2 seconds... 
00:26:23.570 24785.00 IOPS, 96.82 MiB/s [2024-12-11T14:07:16.618Z] 24822.50 IOPS, 96.96 MiB/s 00:26:23.570 Latency(us) 00:26:23.570 [2024-12-11T14:07:16.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.570 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:23.570 nvme0n1 : 2.00 24845.53 97.05 0.00 0.00 5147.18 2592.95 13335.15 00:26:23.570 [2024-12-11T14:07:16.618Z] =================================================================================================================== 00:26:23.570 [2024-12-11T14:07:16.618Z] Total : 24845.53 97.05 0.00 0.00 5147.18 2592.95 13335.15 00:26:23.570 { 00:26:23.570 "results": [ 00:26:23.570 { 00:26:23.570 "job": "nvme0n1", 00:26:23.570 "core_mask": "0x2", 00:26:23.570 "workload": "randread", 00:26:23.570 "status": "finished", 00:26:23.570 "queue_depth": 128, 00:26:23.570 "io_size": 4096, 00:26:23.570 "runtime": 2.003298, 00:26:23.570 "iops": 24845.529721489263, 00:26:23.570 "mibps": 97.05285047456744, 00:26:23.570 "io_failed": 0, 00:26:23.570 "io_timeout": 0, 00:26:23.570 "avg_latency_us": 5147.180256521127, 00:26:23.570 "min_latency_us": 2592.946086956522, 00:26:23.570 "max_latency_us": 13335.151304347826 00:26:23.570 } 00:26:23.570 ], 00:26:23.570 "core_count": 1 00:26:23.570 } 00:26:23.570 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:23.570 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:23.570 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:23.570 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:23.570 | select(.opcode=="crc32c") 00:26:23.570 | "\(.module_name) \(.executed)"' 00:26:23.570 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:23.829 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:23.829 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:23.829 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:23.829 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:23.829 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3256924 00:26:23.829 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3256924 ']' 00:26:23.829 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3256924 00:26:23.829 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:23.829 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:23.829 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3256924 00:26:23.829 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:23.829 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:26:23.829 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3256924' 00:26:23.829 killing process with pid 3256924 00:26:23.829 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3256924 00:26:23.829 Received shutdown signal, test time was about 2.000000 seconds 00:26:23.829 00:26:23.829 Latency(us) 00:26:23.829 [2024-12-11T14:07:16.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:23.829 [2024-12-11T14:07:16.877Z] =================================================================================================================== 00:26:23.829 [2024-12-11T14:07:16.877Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:23.829 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3256924 00:26:24.088 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:24.088 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:24.088 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:24.088 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:24.088 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:24.088 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:24.088 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:24.088 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:24.088 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3257398 00:26:24.088 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3257398 /var/tmp/bperf.sock 00:26:24.088 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3257398 ']' 00:26:24.088 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:24.088 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.088 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:24.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:24.088 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.088 15:07:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:24.088 [2024-12-11 15:07:16.980016] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:26:24.088 [2024-12-11 15:07:16.980063] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3257398 ] 00:26:24.088 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:24.088 Zero copy mechanism will not be used. 00:26:24.088 [2024-12-11 15:07:17.037749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.088 [2024-12-11 15:07:17.080776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.347 15:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.347 15:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:24.347 15:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:24.347 15:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:24.347 15:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:24.605 15:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.605 15:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.864 nvme0n1 00:26:24.864 15:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:24.864 15:07:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:24.864 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:24.864 Zero copy mechanism will not be used. 00:26:24.864 Running I/O for 2 seconds... 
00:26:27.174 5868.00 IOPS, 733.50 MiB/s [2024-12-11T14:07:20.222Z] 5876.50 IOPS, 734.56 MiB/s 00:26:27.174 Latency(us) 00:26:27.174 [2024-12-11T14:07:20.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.174 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:27.174 nvme0n1 : 2.00 5875.77 734.47 0.00 0.00 2720.24 662.48 11397.57 00:26:27.174 [2024-12-11T14:07:20.222Z] =================================================================================================================== 00:26:27.174 [2024-12-11T14:07:20.222Z] Total : 5875.77 734.47 0.00 0.00 2720.24 662.48 11397.57 00:26:27.174 { 00:26:27.174 "results": [ 00:26:27.174 { 00:26:27.174 "job": "nvme0n1", 00:26:27.174 "core_mask": "0x2", 00:26:27.174 "workload": "randread", 00:26:27.174 "status": "finished", 00:26:27.174 "queue_depth": 16, 00:26:27.174 "io_size": 131072, 00:26:27.174 "runtime": 2.002971, 00:26:27.174 "iops": 5875.771541375287, 00:26:27.174 "mibps": 734.4714426719108, 00:26:27.175 "io_failed": 0, 00:26:27.175 "io_timeout": 0, 00:26:27.175 "avg_latency_us": 2720.2444411441998, 00:26:27.175 "min_latency_us": 662.4834782608696, 00:26:27.175 "max_latency_us": 11397.565217391304 00:26:27.175 } 00:26:27.175 ], 00:26:27.175 "core_count": 1 00:26:27.175 } 00:26:27.175 15:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:27.175 15:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:27.175 15:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:27.175 15:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:27.175 | select(.opcode=="crc32c") 00:26:27.175 | "\(.module_name) \(.executed)"' 00:26:27.175 15:07:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:27.175 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:27.175 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:27.175 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:27.175 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:27.175 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3257398 00:26:27.175 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3257398 ']' 00:26:27.175 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3257398 00:26:27.175 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:27.175 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:27.175 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3257398 00:26:27.175 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:27.175 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:26:27.175 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3257398' 00:26:27.175 killing process with pid 3257398 00:26:27.175 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3257398 00:26:27.175 Received shutdown signal, test time was about 2.000000 seconds 00:26:27.175 00:26:27.175 Latency(us) 00:26:27.175 [2024-12-11T14:07:20.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.175 [2024-12-11T14:07:20.223Z] =================================================================================================================== 00:26:27.175 [2024-12-11T14:07:20.223Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:27.175 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3257398 00:26:27.433 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:27.433 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:27.433 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:27.433 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:27.433 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:27.434 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:27.434 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:27.434 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3258068 00:26:27.434 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3258068 /var/tmp/bperf.sock 00:26:27.434 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:27.434 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3258068 ']' 00:26:27.434 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:27.434 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:27.434 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:27.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:27.434 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:27.434 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:27.434 [2024-12-11 15:07:20.413431] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:26:27.434 [2024-12-11 15:07:20.413482] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3258068 ] 00:26:27.692 [2024-12-11 15:07:20.488577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.692 [2024-12-11 15:07:20.529769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.692 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:27.692 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:27.692 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:27.692 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:27.692 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:27.950 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:27.950 15:07:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:28.208 nvme0n1 00:26:28.208 15:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:28.208 15:07:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:28.208 Running I/O for 2 seconds... 
00:26:30.515 28029.00 IOPS, 109.49 MiB/s [2024-12-11T14:07:23.563Z] 28105.50 IOPS, 109.79 MiB/s 00:26:30.515 Latency(us) 00:26:30.515 [2024-12-11T14:07:23.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.515 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:30.515 nvme0n1 : 2.00 28116.16 109.83 0.00 0.00 4546.69 1837.86 10599.74 00:26:30.515 [2024-12-11T14:07:23.563Z] =================================================================================================================== 00:26:30.515 [2024-12-11T14:07:23.563Z] Total : 28116.16 109.83 0.00 0.00 4546.69 1837.86 10599.74 00:26:30.515 { 00:26:30.515 "results": [ 00:26:30.515 { 00:26:30.515 "job": "nvme0n1", 00:26:30.515 "core_mask": "0x2", 00:26:30.515 "workload": "randwrite", 00:26:30.515 "status": "finished", 00:26:30.515 "queue_depth": 128, 00:26:30.515 "io_size": 4096, 00:26:30.515 "runtime": 2.004826, 00:26:30.515 "iops": 28116.15571625667, 00:26:30.515 "mibps": 109.82873326662762, 00:26:30.515 "io_failed": 0, 00:26:30.515 "io_timeout": 0, 00:26:30.515 "avg_latency_us": 4546.685742635353, 00:26:30.515 "min_latency_us": 1837.8573913043479, 00:26:30.515 "max_latency_us": 10599.735652173913 00:26:30.515 } 00:26:30.515 ], 00:26:30.515 "core_count": 1 00:26:30.515 } 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:30.515 | select(.opcode=="crc32c") 00:26:30.515 | "\(.module_name) \(.executed)"' 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3258068 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3258068 ']' 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3258068 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3258068 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 
-- # '[' reactor_1 = sudo ']' 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3258068' 00:26:30.515 killing process with pid 3258068 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3258068 00:26:30.515 Received shutdown signal, test time was about 2.000000 seconds 00:26:30.515 00:26:30.515 Latency(us) 00:26:30.515 [2024-12-11T14:07:23.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.515 [2024-12-11T14:07:23.563Z] =================================================================================================================== 00:26:30.515 [2024-12-11T14:07:23.563Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:30.515 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3258068 00:26:30.774 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:30.774 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:30.774 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:30.774 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:30.774 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:30.774 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:30.774 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:30.774 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3258555 00:26:30.774 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3258555 /var/tmp/bperf.sock 00:26:30.774 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3258555 ']' 00:26:30.774 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:30.774 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:30.774 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:30.774 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:30.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:30.774 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:30.774 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:30.774 [2024-12-11 15:07:23.672307] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:26:30.774 [2024-12-11 15:07:23.672356] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3258555 ] 00:26:30.774 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:30.774 Zero copy mechanism will not be used. 00:26:30.774 [2024-12-11 15:07:23.748940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.774 [2024-12-11 15:07:23.788957] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.032 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.032 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:31.032 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:31.032 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:31.032 15:07:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:31.289 15:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:31.289 15:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:31.547 nvme0n1 00:26:31.547 15:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:31.547 15:07:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:31.547 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:31.547 Zero copy mechanism will not be used. 00:26:31.547 Running I/O for 2 seconds... 
00:26:33.866 6661.00 IOPS, 832.62 MiB/s [2024-12-11T14:07:26.914Z] 6488.00 IOPS, 811.00 MiB/s 00:26:33.866 Latency(us) 00:26:33.866 [2024-12-11T14:07:26.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.866 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:33.866 nvme0n1 : 2.00 6485.67 810.71 0.00 0.00 2462.65 1880.60 12195.39 00:26:33.866 [2024-12-11T14:07:26.914Z] =================================================================================================================== 00:26:33.866 [2024-12-11T14:07:26.914Z] Total : 6485.67 810.71 0.00 0.00 2462.65 1880.60 12195.39 00:26:33.866 { 00:26:33.866 "results": [ 00:26:33.866 { 00:26:33.866 "job": "nvme0n1", 00:26:33.866 "core_mask": "0x2", 00:26:33.866 "workload": "randwrite", 00:26:33.866 "status": "finished", 00:26:33.866 "queue_depth": 16, 00:26:33.866 "io_size": 131072, 00:26:33.866 "runtime": 2.003647, 00:26:33.866 "iops": 6485.673374601414, 00:26:33.866 "mibps": 810.7091718251768, 00:26:33.866 "io_failed": 0, 00:26:33.866 "io_timeout": 0, 00:26:33.866 "avg_latency_us": 2462.6453631329778, 00:26:33.866 "min_latency_us": 1880.5982608695651, 00:26:33.866 "max_latency_us": 12195.394782608695 00:26:33.866 } 00:26:33.866 ], 00:26:33.866 "core_count": 1 00:26:33.866 } 00:26:33.866 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:33.866 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:33.866 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:33.866 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:33.867 | select(.opcode=="crc32c") 00:26:33.867 | "\(.module_name) \(.executed)"' 00:26:33.867 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:33.867 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:33.867 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:33.867 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:33.867 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:33.867 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3258555 00:26:33.867 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3258555 ']' 00:26:33.867 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3258555 00:26:33.867 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:33.867 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:33.867 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3258555 00:26:33.867 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:33.867 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:26:33.867 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3258555' 00:26:33.867 killing process with pid 3258555 00:26:33.867 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3258555 00:26:33.867 Received shutdown signal, test time was about 2.000000 seconds 00:26:33.867 00:26:33.867 Latency(us) 00:26:33.867 [2024-12-11T14:07:26.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:33.867 [2024-12-11T14:07:26.915Z] =================================================================================================================== 00:26:33.867 [2024-12-11T14:07:26.915Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:33.867 15:07:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3258555 00:26:34.125 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3256899 00:26:34.125 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3256899 ']' 00:26:34.125 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3256899 00:26:34.125 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:34.125 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.125 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3256899 00:26:34.125 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:34.125 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:34.125 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3256899' 00:26:34.125 killing process with pid 3256899 00:26:34.125 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3256899 00:26:34.125 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3256899 00:26:34.383 00:26:34.383 real 0m13.916s 00:26:34.383 user 0m26.703s 00:26:34.383 sys 0m4.518s 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:34.383 ************************************ 00:26:34.383 END TEST nvmf_digest_clean 00:26:34.383 ************************************ 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:34.383 ************************************ 00:26:34.383 START TEST nvmf_digest_error 00:26:34.383 ************************************ 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- 
# run_digest_error 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3259129 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3259129 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3259129 ']' 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:34.383 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:34.383 [2024-12-11 15:07:27.361504] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:26:34.383 [2024-12-11 15:07:27.361551] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:34.642 [2024-12-11 15:07:27.439076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.642 [2024-12-11 15:07:27.479991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:34.642 [2024-12-11 15:07:27.480026] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:34.642 [2024-12-11 15:07:27.480033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:34.642 [2024-12-11 15:07:27.480039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:34.642 [2024-12-11 15:07:27.480045] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
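Unlike the clean variant, run_digest_error starts this second target with --wait-for-rpc and, in the entries just below, assigns the crc32c opcode to the accel 'error' module before the framework initializes, so digest corruption can be injected later in the test. The same assignment can be issued by hand against the target's default RPC socket /var/tmp/spdk.sock (a Unix-domain socket, so it is reachable without entering the network namespace); a minimal sketch:

# map crc32c to the error-injection accel module while the target is still
# waiting for RPCs; the rest of the target config is applied afterwards via rpc_cmd
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py \
    accel_assign_opc -o crc32c -m error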
00:26:34.642 [2024-12-11 15:07:27.480604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:34.642 [2024-12-11 15:07:27.557080] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:34.642 null0 00:26:34.642 [2024-12-11 15:07:27.653062] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:34.642 [2024-12-11 15:07:27.677278] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3259296 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3259296 /var/tmp/bperf.sock 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3259296 
']' 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:34.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:34.642 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:34.906 [2024-12-11 15:07:27.727724] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:26:34.906 [2024-12-11 15:07:27.727763] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259296 ] 00:26:34.906 [2024-12-11 15:07:27.784744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.906 [2024-12-11 15:07:27.824657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.906 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:34.906 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:34.906 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:34.906 15:07:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:35.164 15:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:35.164 15:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.164 15:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.164 15:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.164 15:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:35.164 15:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:35.730 nvme0n1 00:26:35.730 15:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:35.730 15:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.730 15:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
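At this point the bdevperf instance has its digest I/O path prepared: error statistics and retry behaviour are configured via bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1, any previous injection is cleared with accel_error_inject_error -o crc32c -t disable, and the controller is attached with --ddgst. In the entries that follow, the target is armed to corrupt a batch of crc32c operations and the timed bdevperf run is started, which is what produces the data digest errors and COMMAND TRANSIENT TRANSPORT ERROR completions recorded below. A minimal sketch of that final step, assuming the sockets and paths shown in this log:

# arm 256 corrupted crc32c results in the target's 'error' accel module
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py \
    accel_error_inject_error -o crc32c -t corrupt -i 256
# kick off the timed run against the attached nvme0n1 bdev
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests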
00:26:35.730 15:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.730 15:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:35.730 15:07:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:35.730 Running I/O for 2 seconds... 00:26:35.730 [2024-12-11 15:07:28.670217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.730 [2024-12-11 15:07:28.670250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.730 [2024-12-11 15:07:28.670265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.730 [2024-12-11 15:07:28.681824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.730 [2024-12-11 15:07:28.681850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.730 [2024-12-11 15:07:28.681859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.730 [2024-12-11 15:07:28.692858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.730 [2024-12-11 15:07:28.692881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.730 [2024-12-11 15:07:28.692889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.730 [2024-12-11 15:07:28.701316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.730 [2024-12-11 15:07:28.701339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.730 [2024-12-11 15:07:28.701348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.730 [2024-12-11 15:07:28.713508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.730 [2024-12-11 15:07:28.713531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.730 [2024-12-11 15:07:28.713539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.730 [2024-12-11 15:07:28.726388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.730 [2024-12-11 15:07:28.726410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.730 [2024-12-11 15:07:28.726418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.730 [2024-12-11 15:07:28.734653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.730 [2024-12-11 15:07:28.734674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.730 [2024-12-11 15:07:28.734682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.730 [2024-12-11 15:07:28.745514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.730 [2024-12-11 15:07:28.745536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.730 [2024-12-11 15:07:28.745545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.730 [2024-12-11 15:07:28.757142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.730 [2024-12-11 15:07:28.757169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.730 [2024-12-11 15:07:28.757178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.730 [2024-12-11 15:07:28.766250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.730 [2024-12-11 15:07:28.766275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.730 [2024-12-11 15:07:28.766283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.776628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.776649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.776657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.785679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.785700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.785708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.797095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.797116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.797124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.808610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.808631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.808639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.817297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.817327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:7383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.817336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.826849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.826870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.826880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.838695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.838716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.838724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.848801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.848821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.848829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.857213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.857234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:22982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.857241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.869230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.869251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.869258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.881617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.881637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.881645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.894235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.894255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.894263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.903993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.904014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.904022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.912675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.912695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.912703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.922722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.922742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.922751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.932004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.932026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.932034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.942547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.942568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:35.989 [2024-12-11 15:07:28.942581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.953875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.953898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.953907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.963580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.963601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.963609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.972879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.972899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.972907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.981456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.981476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.989 [2024-12-11 15:07:28.981484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.989 [2024-12-11 15:07:28.992588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.989 [2024-12-11 15:07:28.992609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.990 [2024-12-11 15:07:28.992618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.990 [2024-12-11 15:07:29.003098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.990 [2024-12-11 15:07:29.003118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.990 [2024-12-11 15:07:29.003126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.990 [2024-12-11 15:07:29.011444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.990 [2024-12-11 15:07:29.011463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4319 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.990 [2024-12-11 15:07:29.011471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:35.990 [2024-12-11 15:07:29.023973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:35.990 [2024-12-11 15:07:29.023993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:35.990 [2024-12-11 15:07:29.024002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.248 [2024-12-11 15:07:29.035399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.248 [2024-12-11 15:07:29.035427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.248 [2024-12-11 15:07:29.035435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.248 [2024-12-11 15:07:29.044325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.248 [2024-12-11 15:07:29.044345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.248 [2024-12-11 15:07:29.044354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.248 [2024-12-11 15:07:29.055447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.248 [2024-12-11 15:07:29.055468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.248 [2024-12-11 15:07:29.055476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.248 [2024-12-11 15:07:29.066722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.248 [2024-12-11 15:07:29.066742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.248 [2024-12-11 15:07:29.066750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.248 [2024-12-11 15:07:29.076549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.248 [2024-12-11 15:07:29.076570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.248 [2024-12-11 15:07:29.076578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.248 [2024-12-11 15:07:29.086096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.248 [2024-12-11 15:07:29.086116] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.248 [2024-12-11 15:07:29.086125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.248 [2024-12-11 15:07:29.095931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.248 [2024-12-11 15:07:29.095952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.248 [2024-12-11 15:07:29.095960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.248 [2024-12-11 15:07:29.105275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.248 [2024-12-11 15:07:29.105295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.248 [2024-12-11 15:07:29.105302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.248 [2024-12-11 15:07:29.114585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.248 [2024-12-11 15:07:29.114605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.248 [2024-12-11 15:07:29.114617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.248 [2024-12-11 15:07:29.123428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.248 [2024-12-11 15:07:29.123449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.248 [2024-12-11 15:07:29.123457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.248 [2024-12-11 15:07:29.133592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.248 [2024-12-11 15:07:29.133612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.248 [2024-12-11 15:07:29.133620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.248 [2024-12-11 15:07:29.144808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.248 [2024-12-11 15:07:29.144829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.248 [2024-12-11 15:07:29.144837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.248 [2024-12-11 15:07:29.152905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.248 
[2024-12-11 15:07:29.152925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.248 [2024-12-11 15:07:29.152934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.248 [2024-12-11 15:07:29.163223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.248 [2024-12-11 15:07:29.163244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.248 [2024-12-11 15:07:29.163252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.248 [2024-12-11 15:07:29.175833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.249 [2024-12-11 15:07:29.175853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.249 [2024-12-11 15:07:29.175861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.249 [2024-12-11 15:07:29.189132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.249 [2024-12-11 15:07:29.189153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.249 [2024-12-11 15:07:29.189167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.249 [2024-12-11 15:07:29.201239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.249 [2024-12-11 15:07:29.201259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.249 [2024-12-11 15:07:29.201267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.249 [2024-12-11 15:07:29.209253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.249 [2024-12-11 15:07:29.209277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.249 [2024-12-11 15:07:29.209285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.249 [2024-12-11 15:07:29.220474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.249 [2024-12-11 15:07:29.220494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.249 [2024-12-11 15:07:29.220502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.249 [2024-12-11 15:07:29.232426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb19300) 00:26:36.249 [2024-12-11 15:07:29.232454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.249 [2024-12-11 15:07:29.232462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.249 [2024-12-11 15:07:29.241918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.249 [2024-12-11 15:07:29.241938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.249 [2024-12-11 15:07:29.241946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.249 [2024-12-11 15:07:29.251364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.249 [2024-12-11 15:07:29.251383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.249 [2024-12-11 15:07:29.251391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.249 [2024-12-11 15:07:29.260145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.249 [2024-12-11 15:07:29.260173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.249 [2024-12-11 15:07:29.260182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.249 [2024-12-11 15:07:29.271394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.249 [2024-12-11 15:07:29.271414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.249 [2024-12-11 15:07:29.271422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.249 [2024-12-11 15:07:29.279689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.249 [2024-12-11 15:07:29.279709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.249 [2024-12-11 15:07:29.279717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.249 [2024-12-11 15:07:29.291691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.249 [2024-12-11 15:07:29.291711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.249 [2024-12-11 15:07:29.291720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.507 [2024-12-11 15:07:29.303199] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.507 [2024-12-11 15:07:29.303219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.507 [2024-12-11 15:07:29.303226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.507 [2024-12-11 15:07:29.311228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.311247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.311255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.323775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.323795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.323803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.335136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.335156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.335169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.344174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.344194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.344202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.356661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.356680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.356688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.367441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.367461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.367469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:36.508 [2024-12-11 15:07:29.375927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.375947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.375955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.386099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.386119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.386130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.397571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.397592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.397600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.405413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.405433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.405440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.416359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.416378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.416386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.426235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.426255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.426263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.436103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.436123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.436131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.445038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.445059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.445068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.454805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.454825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.454833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.464139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.464163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.464172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.473200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.473223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.473231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.482860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.482880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.482887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.492584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.492604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.492612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.502101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.502120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.502128] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.514189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.514209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.514217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.523655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.523675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.523683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.534985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.535006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.535015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.508 [2024-12-11 15:07:29.546626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.508 [2024-12-11 15:07:29.546646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.508 [2024-12-11 15:07:29.546654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 [2024-12-11 15:07:29.554941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.767 [2024-12-11 15:07:29.554961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.767 [2024-12-11 15:07:29.554969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 [2024-12-11 15:07:29.565278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.767 [2024-12-11 15:07:29.565298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.767 [2024-12-11 15:07:29.565306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 [2024-12-11 15:07:29.574598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.767 [2024-12-11 15:07:29.574618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.767 [2024-12-11 15:07:29.574626] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 [2024-12-11 15:07:29.585225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.767 [2024-12-11 15:07:29.585245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.767 [2024-12-11 15:07:29.585253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 [2024-12-11 15:07:29.593715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.767 [2024-12-11 15:07:29.593736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.767 [2024-12-11 15:07:29.593745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 [2024-12-11 15:07:29.604181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.767 [2024-12-11 15:07:29.604202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.767 [2024-12-11 15:07:29.604210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 [2024-12-11 15:07:29.613599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.767 [2024-12-11 15:07:29.613619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.767 [2024-12-11 15:07:29.613627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 [2024-12-11 15:07:29.622472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.767 [2024-12-11 15:07:29.622492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.767 [2024-12-11 15:07:29.622501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 [2024-12-11 15:07:29.632333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.767 [2024-12-11 15:07:29.632353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.767 [2024-12-11 15:07:29.632361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 [2024-12-11 15:07:29.642177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.767 [2024-12-11 15:07:29.642197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:36.767 [2024-12-11 15:07:29.642209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 [2024-12-11 15:07:29.651036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.767 [2024-12-11 15:07:29.651057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.767 [2024-12-11 15:07:29.651065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 24864.00 IOPS, 97.12 MiB/s [2024-12-11T14:07:29.815Z] [2024-12-11 15:07:29.661450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.767 [2024-12-11 15:07:29.661470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.767 [2024-12-11 15:07:29.661479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 [2024-12-11 15:07:29.676390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.767 [2024-12-11 15:07:29.676410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.767 [2024-12-11 15:07:29.676417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 [2024-12-11 15:07:29.689585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.767 [2024-12-11 15:07:29.689605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.767 [2024-12-11 15:07:29.689614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 [2024-12-11 15:07:29.702303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.767 [2024-12-11 15:07:29.702323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.767 [2024-12-11 15:07:29.702331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 [2024-12-11 15:07:29.715076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.767 [2024-12-11 15:07:29.715097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.767 [2024-12-11 15:07:29.715105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 [2024-12-11 15:07:29.726811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.767 [2024-12-11 15:07:29.726831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.767 [2024-12-11 15:07:29.726840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.767 [2024-12-11 15:07:29.739747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.768 [2024-12-11 15:07:29.739767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.768 [2024-12-11 15:07:29.739775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.768 [2024-12-11 15:07:29.747924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.768 [2024-12-11 15:07:29.747945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.768 [2024-12-11 15:07:29.747953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.768 [2024-12-11 15:07:29.759195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.768 [2024-12-11 15:07:29.759216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.768 [2024-12-11 15:07:29.759224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.768 [2024-12-11 15:07:29.767456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.768 [2024-12-11 15:07:29.767477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.768 [2024-12-11 15:07:29.767485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.768 [2024-12-11 15:07:29.779901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.768 [2024-12-11 15:07:29.779922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.768 [2024-12-11 15:07:29.779930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.768 [2024-12-11 15:07:29.791566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.768 [2024-12-11 15:07:29.791587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.768 [2024-12-11 15:07:29.791595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.768 [2024-12-11 15:07:29.800304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.768 
[2024-12-11 15:07:29.800324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.768 [2024-12-11 15:07:29.800332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.768 [2024-12-11 15:07:29.812418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:36.768 [2024-12-11 15:07:29.812439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.768 [2024-12-11 15:07:29.812447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.026 [2024-12-11 15:07:29.821261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.026 [2024-12-11 15:07:29.821282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.026 [2024-12-11 15:07:29.821290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.026 [2024-12-11 15:07:29.833917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.026 [2024-12-11 15:07:29.833939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.026 [2024-12-11 15:07:29.833950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.026 [2024-12-11 15:07:29.845002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.026 [2024-12-11 15:07:29.845022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.026 [2024-12-11 15:07:29.845031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.026 [2024-12-11 15:07:29.853697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.026 [2024-12-11 15:07:29.853717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.026 [2024-12-11 15:07:29.853726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.026 [2024-12-11 15:07:29.865843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.026 [2024-12-11 15:07:29.865863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.026 [2024-12-11 15:07:29.865871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.026 [2024-12-11 15:07:29.874234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xb19300) 00:26:37.026 [2024-12-11 15:07:29.874255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.026 [2024-12-11 15:07:29.874263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:29.885377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:29.885398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:23385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:29.885406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:29.896746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:29.896766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:29.896775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:29.905791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:29.905811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:29.905819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:29.916808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:29.916828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:29.916836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:29.925456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:29.925480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:29.925488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:29.935002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:29.935022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:29.935031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:29.943639] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:29.943660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:29.943669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:29.952920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:29.952941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:29.952949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:29.962711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:29.962731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:29.962739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:29.974248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:29.974269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:29.974277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:29.984790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:29.984812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:29.984820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:29.995146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:29.995174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:29.995182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:30.003825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:30.003847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:30.003857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:37.027 [2024-12-11 15:07:30.014938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:30.014963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:30.014973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:30.028670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:30.028693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:30.028702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:30.037385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:30.037408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:30.037417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:30.049634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:30.049658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:30.049667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:30.061211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:30.061233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:30.061242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.027 [2024-12-11 15:07:30.070295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.027 [2024-12-11 15:07:30.070317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.027 [2024-12-11 15:07:30.070327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.286 [2024-12-11 15:07:30.081108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.286 [2024-12-11 15:07:30.081130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.286 [2024-12-11 15:07:30.081139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.286 [2024-12-11 15:07:30.091376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.286 [2024-12-11 15:07:30.091398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.286 [2024-12-11 15:07:30.091406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.286 [2024-12-11 15:07:30.100862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.286 [2024-12-11 15:07:30.100884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.286 [2024-12-11 15:07:30.100897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.286 [2024-12-11 15:07:30.110587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.286 [2024-12-11 15:07:30.110610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.286 [2024-12-11 15:07:30.110619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.286 [2024-12-11 15:07:30.120295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.286 [2024-12-11 15:07:30.120316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.286 [2024-12-11 15:07:30.120325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.286 [2024-12-11 15:07:30.131209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.286 [2024-12-11 15:07:30.131230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.286 [2024-12-11 15:07:30.131239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.286 [2024-12-11 15:07:30.139523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.286 [2024-12-11 15:07:30.139545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.286 [2024-12-11 15:07:30.139553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.286 [2024-12-11 15:07:30.151271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.286 [2024-12-11 15:07:30.151293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.286 [2024-12-11 15:07:30.151302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.286 [2024-12-11 15:07:30.161988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.286 [2024-12-11 15:07:30.162010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.286 [2024-12-11 15:07:30.162019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.286 [2024-12-11 15:07:30.172760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.286 [2024-12-11 15:07:30.172782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.286 [2024-12-11 15:07:30.172790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.286 [2024-12-11 15:07:30.184225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.286 [2024-12-11 15:07:30.184246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.286 [2024-12-11 15:07:30.184255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.286 [2024-12-11 15:07:30.197386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.286 [2024-12-11 15:07:30.197410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.286 [2024-12-11 15:07:30.197419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.287 [2024-12-11 15:07:30.208141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.287 [2024-12-11 15:07:30.208168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.287 [2024-12-11 15:07:30.208177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.287 [2024-12-11 15:07:30.217276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.287 [2024-12-11 15:07:30.217298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.287 [2024-12-11 15:07:30.217306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.287 [2024-12-11 15:07:30.227335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.287 [2024-12-11 15:07:30.227357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.287 [2024-12-11 15:07:30.227366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.287 [2024-12-11 15:07:30.237908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.287 [2024-12-11 15:07:30.237931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.287 [2024-12-11 15:07:30.237940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.287 [2024-12-11 15:07:30.246865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.287 [2024-12-11 15:07:30.246886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.287 [2024-12-11 15:07:30.246895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.287 [2024-12-11 15:07:30.258807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.287 [2024-12-11 15:07:30.258828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.287 [2024-12-11 15:07:30.258837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.287 [2024-12-11 15:07:30.267366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.287 [2024-12-11 15:07:30.267388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.287 [2024-12-11 15:07:30.267396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.287 [2024-12-11 15:07:30.280435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.287 [2024-12-11 15:07:30.280457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.287 [2024-12-11 15:07:30.280465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.287 [2024-12-11 15:07:30.292239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.287 [2024-12-11 15:07:30.292261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.287 [2024-12-11 15:07:30.292269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.287 [2024-12-11 15:07:30.300792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.287 [2024-12-11 15:07:30.300814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:37.287 [2024-12-11 15:07:30.300823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.287 [2024-12-11 15:07:30.312270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.287 [2024-12-11 15:07:30.312291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.287 [2024-12-11 15:07:30.312301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.287 [2024-12-11 15:07:30.321846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.287 [2024-12-11 15:07:30.321867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.287 [2024-12-11 15:07:30.321876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.287 [2024-12-11 15:07:30.331760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.287 [2024-12-11 15:07:30.331781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.287 [2024-12-11 15:07:30.331790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.545 [2024-12-11 15:07:30.341583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.545 [2024-12-11 15:07:30.341605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.545 [2024-12-11 15:07:30.341613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.545 [2024-12-11 15:07:30.351147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.545 [2024-12-11 15:07:30.351174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.545 [2024-12-11 15:07:30.351183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.545 [2024-12-11 15:07:30.361850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.545 [2024-12-11 15:07:30.361870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.545 [2024-12-11 15:07:30.361879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.545 [2024-12-11 15:07:30.372999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.545 [2024-12-11 15:07:30.373020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23923 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.545 [2024-12-11 15:07:30.373035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.545 [2024-12-11 15:07:30.383337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.545 [2024-12-11 15:07:30.383358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.545 [2024-12-11 15:07:30.383365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.545 [2024-12-11 15:07:30.395346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.545 [2024-12-11 15:07:30.395366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.545 [2024-12-11 15:07:30.395375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.545 [2024-12-11 15:07:30.406420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.545 [2024-12-11 15:07:30.406441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.545 [2024-12-11 15:07:30.406450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.545 [2024-12-11 15:07:30.414727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.545 [2024-12-11 15:07:30.414747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.545 [2024-12-11 15:07:30.414755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.545 [2024-12-11 15:07:30.426969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.545 [2024-12-11 15:07:30.426990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:8934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.545 [2024-12-11 15:07:30.426998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.545 [2024-12-11 15:07:30.439407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.545 [2024-12-11 15:07:30.439429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.545 [2024-12-11 15:07:30.439438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.545 [2024-12-11 15:07:30.450016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.545 [2024-12-11 15:07:30.450038] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.545 [2024-12-11 15:07:30.450046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.545 [2024-12-11 15:07:30.459011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.545 [2024-12-11 15:07:30.459033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.545 [2024-12-11 15:07:30.459041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.546 [2024-12-11 15:07:30.470677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.546 [2024-12-11 15:07:30.470698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.546 [2024-12-11 15:07:30.470706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.546 [2024-12-11 15:07:30.481633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.546 [2024-12-11 15:07:30.481654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.546 [2024-12-11 15:07:30.481662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.546 [2024-12-11 15:07:30.490564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.546 [2024-12-11 15:07:30.490584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.546 [2024-12-11 15:07:30.490592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.546 [2024-12-11 15:07:30.503548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.546 [2024-12-11 15:07:30.503569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.546 [2024-12-11 15:07:30.503578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.546 [2024-12-11 15:07:30.516519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.546 [2024-12-11 15:07:30.516539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.546 [2024-12-11 15:07:30.516548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.546 [2024-12-11 15:07:30.525269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.546 [2024-12-11 15:07:30.525290] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.546 [2024-12-11 15:07:30.525298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.546 [2024-12-11 15:07:30.537344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.546 [2024-12-11 15:07:30.537365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.546 [2024-12-11 15:07:30.537373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.546 [2024-12-11 15:07:30.549509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.546 [2024-12-11 15:07:30.549529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.546 [2024-12-11 15:07:30.549537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.546 [2024-12-11 15:07:30.563740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.546 [2024-12-11 15:07:30.563761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.546 [2024-12-11 15:07:30.563772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.546 [2024-12-11 15:07:30.574433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.546 [2024-12-11 15:07:30.574453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.546 [2024-12-11 15:07:30.574462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.546 [2024-12-11 15:07:30.588499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.546 [2024-12-11 15:07:30.588520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.546 [2024-12-11 15:07:30.588529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.804 [2024-12-11 15:07:30.602167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.804 [2024-12-11 15:07:30.602189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.804 [2024-12-11 15:07:30.602198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.804 [2024-12-11 15:07:30.612421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 
00:26:37.804 [2024-12-11 15:07:30.612441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.804 [2024-12-11 15:07:30.612449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.804 [2024-12-11 15:07:30.620955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.804 [2024-12-11 15:07:30.620975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.804 [2024-12-11 15:07:30.620983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.804 [2024-12-11 15:07:30.632009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.804 [2024-12-11 15:07:30.632030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.804 [2024-12-11 15:07:30.632038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.804 [2024-12-11 15:07:30.641818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.804 [2024-12-11 15:07:30.641838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.804 [2024-12-11 15:07:30.641847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.804 [2024-12-11 15:07:30.653798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.804 [2024-12-11 15:07:30.653819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.804 [2024-12-11 15:07:30.653827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.804 24293.00 IOPS, 94.89 MiB/s [2024-12-11T14:07:30.852Z] [2024-12-11 15:07:30.663455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb19300) 00:26:37.804 [2024-12-11 15:07:30.663479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.804 [2024-12-11 15:07:30.663488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.804 00:26:37.804 Latency(us) 00:26:37.804 [2024-12-11T14:07:30.852Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.804 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:37.804 nvme0n1 : 2.04 23824.72 93.07 0.00 0.00 5261.02 2721.17 45134.36 00:26:37.804 [2024-12-11T14:07:30.852Z] =================================================================================================================== 00:26:37.804 [2024-12-11T14:07:30.852Z] Total : 23824.72 93.07 0.00 0.00 5261.02 2721.17 
45134.36 00:26:37.804 { 00:26:37.804 "results": [ 00:26:37.804 { 00:26:37.804 "job": "nvme0n1", 00:26:37.804 "core_mask": "0x2", 00:26:37.804 "workload": "randread", 00:26:37.804 "status": "finished", 00:26:37.804 "queue_depth": 128, 00:26:37.804 "io_size": 4096, 00:26:37.804 "runtime": 2.044683, 00:26:37.804 "iops": 23824.72001772402, 00:26:37.804 "mibps": 93.06531256923445, 00:26:37.804 "io_failed": 0, 00:26:37.804 "io_timeout": 0, 00:26:37.804 "avg_latency_us": 5261.0178607703165, 00:26:37.804 "min_latency_us": 2721.168695652174, 00:26:37.804 "max_latency_us": 45134.358260869565 00:26:37.804 } 00:26:37.804 ], 00:26:37.804 "core_count": 1 00:26:37.804 } 00:26:37.804 15:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:37.804 15:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:37.804 15:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:37.804 15:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:37.804 | .driver_specific 00:26:37.804 | .nvme_error 00:26:37.804 | .status_code 00:26:37.804 | .command_transient_transport_error' 00:26:38.062 15:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 191 > 0 )) 00:26:38.062 15:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3259296 00:26:38.062 15:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3259296 ']' 00:26:38.062 15:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3259296 00:26:38.062 15:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:38.062 15:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.062 15:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3259296 00:26:38.062 15:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:38.062 15:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:38.062 15:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3259296' 00:26:38.062 killing process with pid 3259296 00:26:38.062 15:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3259296 00:26:38.062 Received shutdown signal, test time was about 2.000000 seconds 00:26:38.062 00:26:38.062 Latency(us) 00:26:38.062 [2024-12-11T14:07:31.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.062 [2024-12-11T14:07:31.110Z] =================================================================================================================== 00:26:38.062 [2024-12-11T14:07:31.110Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:38.062 15:07:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3259296 00:26:38.320 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 
131072 16 00:26:38.320 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:38.320 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:38.320 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:38.320 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:38.320 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3259778 00:26:38.320 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3259778 /var/tmp/bperf.sock 00:26:38.320 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:38.320 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3259778 ']' 00:26:38.320 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:38.320 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:38.320 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:38.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:38.320 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:38.320 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.320 [2024-12-11 15:07:31.191832] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:26:38.321 [2024-12-11 15:07:31.191880] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259778 ] 00:26:38.321 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:38.321 Zero copy mechanism will not be used. 
00:26:38.321 [2024-12-11 15:07:31.267861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.321 [2024-12-11 15:07:31.310492] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.578 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:38.578 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:38.578 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:38.578 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:38.578 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:38.578 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.578 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.578 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.578 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:38.578 15:07:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:39.143 nvme0n1 00:26:39.143 15:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:39.143 15:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.143 15:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.143 15:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.143 15:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:39.143 15:07:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:39.143 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:39.143 Zero copy mechanism will not be used. 00:26:39.143 Running I/O for 2 seconds... 
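For readers following the trace above, here is a minimal sketch of the check this digest-error step performs, assuming the same RPC socket (/var/tmp/bperf.sock) and bdev name (nvme0n1) that appear in the trace; RPC_PY and BDEVPERF_PY are illustrative placeholders for SPDK's scripts/rpc.py and examples/bdev/bdevperf/bdevperf.py, and errcount is an illustrative variable name, none of which the harness itself defines:

# Arm the accel error injector to corrupt crc32c results; the -o/-t/-i arguments
# are the ones visible in the trace (issued there through rpc_cmd).
$RPC_PY accel_error_inject_error -o crc32c -t corrupt -i 32

# Drive I/O through the attached nvme0 controller (this run: randread,
# 128 KiB I/O, queue depth 16, 2 seconds, data digest enabled via --ddgst).
$BDEVPERF_PY -s /var/tmp/bperf.sock perform_tests

# Read back the per-bdev NVMe error counters (collected because
# bdev_nvme_set_options was called with --nvme-error-stat) and keep the
# transient transport error count, using the jq filter shown in the trace.
errcount=$($RPC_PY -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# The step passes when at least one transient transport error was recorded,
# as in the (( 191 > 0 )) check logged after the previous run.
(( errcount > 0 ))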
00:26:39.143 [2024-12-11 15:07:32.126289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.143 [2024-12-11 15:07:32.126322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.143 [2024-12-11 15:07:32.126333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.143 [2024-12-11 15:07:32.132779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.143 [2024-12-11 15:07:32.132807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.144 [2024-12-11 15:07:32.132817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.144 [2024-12-11 15:07:32.138219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.144 [2024-12-11 15:07:32.138243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.144 [2024-12-11 15:07:32.138251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.144 [2024-12-11 15:07:32.143950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.144 [2024-12-11 15:07:32.143971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.144 [2024-12-11 15:07:32.143979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.144 [2024-12-11 15:07:32.149511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.144 [2024-12-11 15:07:32.149533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.144 [2024-12-11 15:07:32.149541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.144 [2024-12-11 15:07:32.154989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.144 [2024-12-11 15:07:32.155011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.144 [2024-12-11 15:07:32.155020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.144 [2024-12-11 15:07:32.160465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.144 [2024-12-11 15:07:32.160487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.144 [2024-12-11 15:07:32.160495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.144 [2024-12-11 15:07:32.165980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.144 [2024-12-11 15:07:32.166002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.144 [2024-12-11 15:07:32.166011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.144 [2024-12-11 15:07:32.169251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.144 [2024-12-11 15:07:32.169271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.144 [2024-12-11 15:07:32.169279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.144 [2024-12-11 15:07:32.174091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.144 [2024-12-11 15:07:32.174113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.144 [2024-12-11 15:07:32.174121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.144 [2024-12-11 15:07:32.179584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.144 [2024-12-11 15:07:32.179605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.144 [2024-12-11 15:07:32.179613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.144 [2024-12-11 15:07:32.185182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.144 [2024-12-11 15:07:32.185202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.144 [2024-12-11 15:07:32.185210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.191048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.191070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.191079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.196392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.196414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.196423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.201560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.201582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.201590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.207144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.207174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.207186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.212461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.212482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.212490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.217780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.217801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.217810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.223092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.223115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.223123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.228480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.228502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.228510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.234296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.234318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:39.403 [2024-12-11 15:07:32.234327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.240259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.240281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.240289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.245714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.245735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.245743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.251270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.251291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.251299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.256851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.256873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.256881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.262392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.262414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.262422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.268117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.268140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.268148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.274717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.274740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.274749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.281512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.281533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.281541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.289378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.289400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.289409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.297107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.297130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.297139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.304624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.304647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.304656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.312944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.312968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.312981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.321243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.321267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.321276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.329468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.329492] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.329501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.403 [2024-12-11 15:07:32.338383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.403 [2024-12-11 15:07:32.338406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.403 [2024-12-11 15:07:32.338414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.404 [2024-12-11 15:07:32.346375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.404 [2024-12-11 15:07:32.346398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.404 [2024-12-11 15:07:32.346407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.404 [2024-12-11 15:07:32.354971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.404 [2024-12-11 15:07:32.355000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.404 [2024-12-11 15:07:32.355009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.404 [2024-12-11 15:07:32.362652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.404 [2024-12-11 15:07:32.362675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.404 [2024-12-11 15:07:32.362684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.404 [2024-12-11 15:07:32.370624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.404 [2024-12-11 15:07:32.370647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.404 [2024-12-11 15:07:32.370656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.404 [2024-12-11 15:07:32.378823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.404 [2024-12-11 15:07:32.378846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.404 [2024-12-11 15:07:32.378855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.404 [2024-12-11 15:07:32.386881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 
00:26:39.404 [2024-12-11 15:07:32.386911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.404 [2024-12-11 15:07:32.386921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.404 [2024-12-11 15:07:32.395273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.404 [2024-12-11 15:07:32.395295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.404 [2024-12-11 15:07:32.395304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.404 [2024-12-11 15:07:32.401547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.404 [2024-12-11 15:07:32.401569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.404 [2024-12-11 15:07:32.401578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.404 [2024-12-11 15:07:32.407902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.404 [2024-12-11 15:07:32.407924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.404 [2024-12-11 15:07:32.407933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.404 [2024-12-11 15:07:32.414246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.404 [2024-12-11 15:07:32.414268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.404 [2024-12-11 15:07:32.414276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.404 [2024-12-11 15:07:32.420230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.404 [2024-12-11 15:07:32.420253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.404 [2024-12-11 15:07:32.420262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.404 [2024-12-11 15:07:32.427550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.404 [2024-12-11 15:07:32.427572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.404 [2024-12-11 15:07:32.427581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.404 [2024-12-11 15:07:32.434819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.404 [2024-12-11 15:07:32.434841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.404 [2024-12-11 15:07:32.434850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.404 [2024-12-11 15:07:32.441792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.404 [2024-12-11 15:07:32.441813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.404 [2024-12-11 15:07:32.441822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.663 [2024-12-11 15:07:32.449423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.663 [2024-12-11 15:07:32.449446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.663 [2024-12-11 15:07:32.449455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.663 [2024-12-11 15:07:32.457034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.663 [2024-12-11 15:07:32.457056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.663 [2024-12-11 15:07:32.457064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.663 [2024-12-11 15:07:32.465193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.663 [2024-12-11 15:07:32.465215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.663 [2024-12-11 15:07:32.465224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.663 [2024-12-11 15:07:32.472336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.663 [2024-12-11 15:07:32.472359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.663 [2024-12-11 15:07:32.472368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.663 [2024-12-11 15:07:32.477931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.663 [2024-12-11 15:07:32.477953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.663 [2024-12-11 15:07:32.477961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.663 [2024-12-11 15:07:32.483298] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.663 [2024-12-11 15:07:32.483319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.663 [2024-12-11 15:07:32.483327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.663 [2024-12-11 15:07:32.488595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.663 [2024-12-11 15:07:32.488616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.663 [2024-12-11 15:07:32.488625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.663 [2024-12-11 15:07:32.493978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.663 [2024-12-11 15:07:32.493999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.663 [2024-12-11 15:07:32.494008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.663 [2024-12-11 15:07:32.499364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.663 [2024-12-11 15:07:32.499385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.663 [2024-12-11 15:07:32.499398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.663 [2024-12-11 15:07:32.504768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.663 [2024-12-11 15:07:32.504790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.663 [2024-12-11 15:07:32.504798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.663 [2024-12-11 15:07:32.510129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.510151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.510165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.515763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.515785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.515793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 
dnr:0 00:26:39.664 [2024-12-11 15:07:32.521268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.521290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.521299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.526578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.526599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.526607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.531854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.531876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.531884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.537233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.537254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.537262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.542768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.542791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.542800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.548250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.548271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.548279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.553728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.553749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.553757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.559279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.559301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.559310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.564637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.564658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.564666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.569979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.570001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.570009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.575389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.575410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.575418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.580887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.580909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.580917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.586235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.586256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.586264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.591768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.591790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.591801] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.597359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.597381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.597390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.602754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.602775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.602783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.608210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.608230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.608238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.613644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.613665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.613673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.619065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.619087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.619095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.624453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.624474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.624482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.629767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.629788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 
15:07:32.629796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.635104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.635127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.635135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.640369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.640395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.640404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.645753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.645775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.645784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.651195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.651217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.651225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.656558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.664 [2024-12-11 15:07:32.656580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.664 [2024-12-11 15:07:32.656588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.664 [2024-12-11 15:07:32.661927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.665 [2024-12-11 15:07:32.661948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.665 [2024-12-11 15:07:32.661956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.665 [2024-12-11 15:07:32.667486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.665 [2024-12-11 15:07:32.667508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:39.665 [2024-12-11 15:07:32.667517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.665 [2024-12-11 15:07:32.672972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.665 [2024-12-11 15:07:32.672994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.665 [2024-12-11 15:07:32.673002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.665 [2024-12-11 15:07:32.678374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.665 [2024-12-11 15:07:32.678395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.665 [2024-12-11 15:07:32.678403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.665 [2024-12-11 15:07:32.683798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.665 [2024-12-11 15:07:32.683820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.665 [2024-12-11 15:07:32.683828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.665 [2024-12-11 15:07:32.689074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.665 [2024-12-11 15:07:32.689095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.665 [2024-12-11 15:07:32.689103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.665 [2024-12-11 15:07:32.694430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.665 [2024-12-11 15:07:32.694451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.665 [2024-12-11 15:07:32.694459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.665 [2024-12-11 15:07:32.699746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.665 [2024-12-11 15:07:32.699767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.665 [2024-12-11 15:07:32.699776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.665 [2024-12-11 15:07:32.705029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.665 [2024-12-11 15:07:32.705051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.665 [2024-12-11 15:07:32.705059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.710653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 [2024-12-11 15:07:32.710674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.710682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.716178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 [2024-12-11 15:07:32.716199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.716207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.721695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 [2024-12-11 15:07:32.721715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.721723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.727260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 [2024-12-11 15:07:32.727281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.727290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.732789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 [2024-12-11 15:07:32.732810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.732822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.738168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 [2024-12-11 15:07:32.738189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.738198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.743583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 [2024-12-11 15:07:32.743605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.743613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.748928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 [2024-12-11 15:07:32.748949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.748957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.754279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 [2024-12-11 15:07:32.754301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.754309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.759682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 [2024-12-11 15:07:32.759703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.759712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.765026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 [2024-12-11 15:07:32.765048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.765056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.770475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 [2024-12-11 15:07:32.770496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.770505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.776465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 [2024-12-11 15:07:32.776487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.776496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.781902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 
[2024-12-11 15:07:32.781927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.781935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.787241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 [2024-12-11 15:07:32.787262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.787271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.792553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 [2024-12-11 15:07:32.792574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.792583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.799151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 [2024-12-11 15:07:32.799180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.799188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.924 [2024-12-11 15:07:32.804950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.924 [2024-12-11 15:07:32.804978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.924 [2024-12-11 15:07:32.804987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.812087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.812109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.812118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.819733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.819756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.819764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.827301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.827323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.827332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.834925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.834947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.834956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.842357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.842379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.842388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.849521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.849544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.849553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.857691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.857714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.857723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.865723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.865747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.865755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.873241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.873264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.873273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.880925] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.880950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.880959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.888450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.888473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.888482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.896450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.896474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.896483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.904066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.904089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.904102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.911450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.911474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.911483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.918584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.918608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.918616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.925358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.925382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.925391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 
dnr:0 00:26:39.925 [2024-12-11 15:07:32.932637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.932661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.932670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.940506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.940529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.940538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.948024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.948047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.948056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.955840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.955863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.955872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.925 [2024-12-11 15:07:32.963520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:39.925 [2024-12-11 15:07:32.963543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.925 [2024-12-11 15:07:32.963552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:32.971558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:32.971581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:32.971590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:32.980050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:32.980073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:32.980082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:32.988582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:32.988605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:32.988613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:32.996292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:32.996316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:32.996325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.003659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.003681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.003690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.011923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.011947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.011955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.019367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.019389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.019397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.026859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.026883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.026891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.034005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.034027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.034039] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.041285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.041312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.041321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.048756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.048780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.048788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.056100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.056122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.056131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.062013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.062037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.062047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.067997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.068020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.068029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.074082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.074105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.074113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.080716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.080738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 
15:07:33.080747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.088048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.088070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.088078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.095662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.095689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.095698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.102315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.102338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.102347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.107875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.107897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.107905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.113099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.113123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.113132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.118321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.118342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.118350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.185 4915.00 IOPS, 614.38 MiB/s [2024-12-11T14:07:33.233Z] [2024-12-11 15:07:33.124413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.124436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.124444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.129659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.129681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.129690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.134888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.134910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.134919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.185 [2024-12-11 15:07:33.140194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.185 [2024-12-11 15:07:33.140216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.185 [2024-12-11 15:07:33.140225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.186 [2024-12-11 15:07:33.145430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.186 [2024-12-11 15:07:33.145464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.186 [2024-12-11 15:07:33.145473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.186 [2024-12-11 15:07:33.150665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.186 [2024-12-11 15:07:33.150688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.186 [2024-12-11 15:07:33.150696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.186 [2024-12-11 15:07:33.155838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.186 [2024-12-11 15:07:33.155860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.186 [2024-12-11 15:07:33.155868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.186 [2024-12-11 15:07:33.160996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.186 [2024-12-11 15:07:33.161017] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.186 [2024-12-11 15:07:33.161025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.186 [2024-12-11 15:07:33.166181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.186 [2024-12-11 15:07:33.166202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.186 [2024-12-11 15:07:33.166210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.186 [2024-12-11 15:07:33.171384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.186 [2024-12-11 15:07:33.171407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.186 [2024-12-11 15:07:33.171415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.186 [2024-12-11 15:07:33.176690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.186 [2024-12-11 15:07:33.176712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.186 [2024-12-11 15:07:33.176720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.186 [2024-12-11 15:07:33.182014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.186 [2024-12-11 15:07:33.182036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.186 [2024-12-11 15:07:33.182044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.186 [2024-12-11 15:07:33.187240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.186 [2024-12-11 15:07:33.187261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.186 [2024-12-11 15:07:33.187276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.186 [2024-12-11 15:07:33.192205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.186 [2024-12-11 15:07:33.192227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.186 [2024-12-11 15:07:33.192235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.186 [2024-12-11 15:07:33.197410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 
00:26:40.186 [2024-12-11 15:07:33.197433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.186 [2024-12-11 15:07:33.197441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.186 [2024-12-11 15:07:33.202453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.186 [2024-12-11 15:07:33.202474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.186 [2024-12-11 15:07:33.202483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.186 [2024-12-11 15:07:33.207396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.186 [2024-12-11 15:07:33.207417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.186 [2024-12-11 15:07:33.207426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.186 [2024-12-11 15:07:33.212532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.186 [2024-12-11 15:07:33.212554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.186 [2024-12-11 15:07:33.212562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.186 [2024-12-11 15:07:33.217715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.186 [2024-12-11 15:07:33.217737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.186 [2024-12-11 15:07:33.217745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.186 [2024-12-11 15:07:33.223007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.186 [2024-12-11 15:07:33.223029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.186 [2024-12-11 15:07:33.223037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.186 [2024-12-11 15:07:33.228282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.186 [2024-12-11 15:07:33.228304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.186 [2024-12-11 15:07:33.228312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.445 [2024-12-11 15:07:33.233464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.445 [2024-12-11 15:07:33.233485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-11 15:07:33.233494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.445 [2024-12-11 15:07:33.238662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.445 [2024-12-11 15:07:33.238683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-11 15:07:33.238692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.445 [2024-12-11 15:07:33.243838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.445 [2024-12-11 15:07:33.243859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-11 15:07:33.243867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.445 [2024-12-11 15:07:33.249053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.445 [2024-12-11 15:07:33.249075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-11 15:07:33.249083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.445 [2024-12-11 15:07:33.254273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.445 [2024-12-11 15:07:33.254295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-11 15:07:33.254303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.445 [2024-12-11 15:07:33.259546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.445 [2024-12-11 15:07:33.259568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-11 15:07:33.259576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.445 [2024-12-11 15:07:33.265795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.445 [2024-12-11 15:07:33.265818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.445 [2024-12-11 15:07:33.265826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.445 [2024-12-11 15:07:33.272984] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.273008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.273016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.280006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.280028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.280040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.287047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.287070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.287079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.294524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.294548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.294557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.301872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.301893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.301902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.309232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.309255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.309263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.316900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.316923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.316932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:26:40.446 [2024-12-11 15:07:33.324343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.324366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.324374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.328366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.328387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.328396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.332556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.332577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.332585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.337744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.337770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.337779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.342942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.342964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.342972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.348815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.348837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.348846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.356124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.356146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.356154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.364433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.364456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.364465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.373027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.373049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.373058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.381021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.381044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.381052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.389065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.389090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.389099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.396609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.396632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.396641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.404043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.404066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.404075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.411903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.411926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.411935] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.419524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.419547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.419555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.426960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.426982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.426990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.434662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.434685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.434694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.442721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.442743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.442751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.450369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.450392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.450400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.458846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.458869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 [2024-12-11 15:07:33.458878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.446 [2024-12-11 15:07:33.466470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.446 [2024-12-11 15:07:33.466492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.446 
[2024-12-11 15:07:33.466505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.447 [2024-12-11 15:07:33.474468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.447 [2024-12-11 15:07:33.474491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-12-11 15:07:33.474500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.447 [2024-12-11 15:07:33.483116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.447 [2024-12-11 15:07:33.483138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.447 [2024-12-11 15:07:33.483147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.491055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.491079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.491089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.497026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.497048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.497057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.503817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.503840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.503849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.509497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.509519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.509527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.514805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.514826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.514835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.520056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.520077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.520086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.525318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.525343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.525351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.530549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.530570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.530578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.535783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.535803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.535811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.540962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.540983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.540992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.546181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.546202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.546212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.551390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.551411] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.551420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.556611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.556632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.556640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.561834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.561857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.561865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.567081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.567102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.567113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.572236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.572258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.572266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.577456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.577477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.577485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.582750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.582772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.582780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.587647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.587669] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.587677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.593042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.593064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.593073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.598251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.598272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.598281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.603531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.603553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.603561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.608771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.608793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.608800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.614069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.614094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.614103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.619300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.619321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.619329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.707 [2024-12-11 15:07:33.624485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ebbf80) 00:26:40.707 [2024-12-11 15:07:33.624507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.707 [2024-12-11 15:07:33.624516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.629673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.629695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.629703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.635464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.635485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.635493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.641649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.641671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.641679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.646990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.647015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.647023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.652276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.652298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.652307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.657529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.657551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.657559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.662761] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.662783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.662792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.667920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.667941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.667950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.673052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.673073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.673081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.678243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.678265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.678273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.683503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.683525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.683533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.688773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.688795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.688802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.694009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.694030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.694038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 
dnr:0 00:26:40.708 [2024-12-11 15:07:33.699946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.699969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.699977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.705605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.705628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.705640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.710839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.710861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.710869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.716037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.716059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.716067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.721233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.721255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.721263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.726377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.726398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.726407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.731625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.731646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.731656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.736830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.736852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.736861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.742019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.742040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.742048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.708 [2024-12-11 15:07:33.747243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.708 [2024-12-11 15:07:33.747267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.708 [2024-12-11 15:07:33.747275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.752556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.752582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.752590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.758220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.758242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.758250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.764015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.764037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.764045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.769220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.769242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.769250] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.774415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.774436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.774444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.779714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.779735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.779743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.784933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.784954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.784963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.790120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.790141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.790149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.795350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.795371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.795378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.800663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.800684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.800692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.805912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.805933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 
15:07:33.805941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.811213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.811234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.811241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.816397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.816418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.816425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.821567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.821588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.821596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.826786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.826807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.826815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.831964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.831986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.831993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.837203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.837224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.837232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.842511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.842532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.842543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.847757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.847778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.847787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.852955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.852976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.852984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.858185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.858206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.858214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.863393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.863414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.863422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.868620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.868642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.868650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.873835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.873856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.873864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.879124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.879145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.879153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.884394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.884414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.884422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.969 [2024-12-11 15:07:33.889645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.969 [2024-12-11 15:07:33.889667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.969 [2024-12-11 15:07:33.889675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.894893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.894914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.894922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.900192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.900213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.900221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.905451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.905472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.905480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.910692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.910713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.910721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.916581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.916603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.916611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.919788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.919810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.919818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.925074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.925095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.925103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.930248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.930269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.930283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.935441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.935463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.935471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.940701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.940722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.940730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.945929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.945950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.945958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.951216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 
[2024-12-11 15:07:33.951236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.951244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.956429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.956450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.956458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.961671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.961692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.961700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.966885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.966906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.966913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.972179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.972199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.972207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.977349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.977375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.977385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.982558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.982581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.982589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.987801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.987821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.987829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.992933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.992954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.992962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:33.998201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:33.998221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:33.998229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:34.003334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:34.003354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:34.003362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:40.970 [2024-12-11 15:07:34.008584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:40.970 [2024-12-11 15:07:34.008604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.970 [2024-12-11 15:07:34.008612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.013892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.013913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.013920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.019219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.019240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.019248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.024339] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.024360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.024368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.029571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.029592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.029599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.034856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.034877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.034886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.040173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.040193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.040201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.045489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.045510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.045520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.050821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.050841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.050850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.056175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.056196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.056204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:26:41.230 [2024-12-11 15:07:34.061510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.061530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.061538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.066766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.066787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.066799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.072009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.072031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.072039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.077299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.077321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.077328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.082584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.082605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.082616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.087859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.087881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.087890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.093264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.093286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.093294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.098819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.098840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.098848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.104073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.104094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.104103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.109617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.109639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.109648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.115446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.115472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.115480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:41.230 [2024-12-11 15:07:34.120703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.120725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.120733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:41.230 5219.00 IOPS, 652.38 MiB/s [2024-12-11T14:07:34.278Z] [2024-12-11 15:07:34.126890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ebbf80) 00:26:41.230 [2024-12-11 15:07:34.126913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.230 [2024-12-11 15:07:34.126921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:41.230 00:26:41.230 Latency(us) 00:26:41.230 [2024-12-11T14:07:34.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.230 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:41.230 nvme0n1 : 2.00 5218.22 652.28 0.00 0.00 3063.16 762.21 9175.04 00:26:41.230 [2024-12-11T14:07:34.278Z] 
=================================================================================================================== 00:26:41.230 [2024-12-11T14:07:34.278Z] Total : 5218.22 652.28 0.00 0.00 3063.16 762.21 9175.04 00:26:41.231 { 00:26:41.231 "results": [ 00:26:41.231 { 00:26:41.231 "job": "nvme0n1", 00:26:41.231 "core_mask": "0x2", 00:26:41.231 "workload": "randread", 00:26:41.231 "status": "finished", 00:26:41.231 "queue_depth": 16, 00:26:41.231 "io_size": 131072, 00:26:41.231 "runtime": 2.003366, 00:26:41.231 "iops": 5218.217739544347, 00:26:41.231 "mibps": 652.2772174430434, 00:26:41.231 "io_failed": 0, 00:26:41.231 "io_timeout": 0, 00:26:41.231 "avg_latency_us": 3063.164304406052, 00:26:41.231 "min_latency_us": 762.2121739130434, 00:26:41.231 "max_latency_us": 9175.04 00:26:41.231 } 00:26:41.231 ], 00:26:41.231 "core_count": 1 00:26:41.231 } 00:26:41.231 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:41.231 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:41.231 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:41.231 | .driver_specific 00:26:41.231 | .nvme_error 00:26:41.231 | .status_code 00:26:41.231 | .command_transient_transport_error' 00:26:41.231 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:41.490 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 337 > 0 )) 00:26:41.490 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3259778 00:26:41.490 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3259778 ']' 00:26:41.490 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3259778 00:26:41.490 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:41.490 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:41.490 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3259778 00:26:41.490 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:41.490 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:41.490 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3259778' 00:26:41.490 killing process with pid 3259778 00:26:41.490 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3259778 00:26:41.490 Received shutdown signal, test time was about 2.000000 seconds 00:26:41.490 00:26:41.490 Latency(us) 00:26:41.490 [2024-12-11T14:07:34.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.490 [2024-12-11T14:07:34.538Z] =================================================================================================================== 00:26:41.490 [2024-12-11T14:07:34.538Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:41.490 15:07:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3259778 00:26:41.748 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:41.748 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:41.748 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:41.748 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:41.748 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:41.748 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:41.748 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3260386 00:26:41.748 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3260386 /var/tmp/bperf.sock 00:26:41.748 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3260386 ']' 00:26:41.748 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:41.748 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.748 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:41.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:41.748 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.748 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:41.748 [2024-12-11 15:07:34.604149] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:26:41.748 [2024-12-11 15:07:34.604202] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3260386 ] 00:26:41.748 [2024-12-11 15:07:34.663470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.748 [2024-12-11 15:07:34.706918] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.006 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:42.006 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:42.006 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:42.006 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:42.006 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:42.006 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.006 15:07:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:42.006 15:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.006 15:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:42.006 15:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:42.576 nvme0n1 00:26:42.576 15:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:42.576 15:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.576 15:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:42.576 15:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.576 15:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:42.576 15:07:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:42.576 Running I/O for 2 seconds... 
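For reference, the RPC sequence that host/digest.sh drives in the trace above can be condensed into a short standalone sketch. Everything in it is lifted from this log (the bdevperf command line, the 10.0.0.2:4420 listener, the nqn.2016-06.io.spdk:cnode1 subsystem, the accel_error_inject_error arguments and the jq filter used by get_transient_errcount); the nvmf target set up earlier in the run is assumed to already exist, and helpers such as waitforlisten/killprocess are simplified away, so treat this as an illustration of the flow rather than the exact script.

#!/usr/bin/env bash
# Condensed sketch of the nvmf_digest_error flow traced above
# (randwrite, 4096-byte I/O, queue depth 128, data digest enabled).
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
rpc=$rootdir/scripts/rpc.py
bperf_sock=/var/tmp/bperf.sock

# Start bdevperf on its own RPC socket and leave it waiting for RPCs (-z).
"$rootdir/build/examples/bdevperf" -m 2 -r "$bperf_sock" \
    -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!
sleep 1   # the real script waits for the socket via waitforlisten instead

# Collect per-controller NVMe error statistics and retry failed I/O forever.
"$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# crc32c injection is disabled on the target (default RPC socket, which is
# what rpc_cmd uses in the trace) while the controller is attached with
# data digest (--ddgst) enabled.
"$rpc" accel_error_inject_error -o crc32c -t disable
"$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Turn crc32c corruption on (arguments copied from the trace) and run the workload.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$bperf_sock" perform_tests

# Read back the transient transport error counter that the test asserts on,
# exactly as get_transient_errcount does with bdev_get_iostat + jq.
errcount=$("$rpc" -s "$bperf_sock" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 )) && echo "data digest errors were counted as expected: $errcount"

kill "$bperfpid"

In the randread iteration that precedes this one the same counter comes back as 337 (the "(( 337 > 0 ))" check above), which is what the test treats as a pass: the injected crc32c corruption must surface as COMMAND TRANSIENT TRANSPORT ERROR completions rather than as silent data corruption.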
00:26:42.576 [2024-12-11 15:07:35.546387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee23b8 00:26:42.576 [2024-12-11 15:07:35.547321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.576 [2024-12-11 15:07:35.547349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:42.576 [2024-12-11 15:07:35.556655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee5220 00:26:42.576 [2024-12-11 15:07:35.557922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.576 [2024-12-11 15:07:35.557943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:42.576 [2024-12-11 15:07:35.564780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eeaab8 00:26:42.576 [2024-12-11 15:07:35.566036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.576 [2024-12-11 15:07:35.566056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.576 [2024-12-11 15:07:35.574975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efb480 00:26:42.576 [2024-12-11 15:07:35.576022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.576 [2024-12-11 15:07:35.576041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:42.576 [2024-12-11 15:07:35.585435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efb480 00:26:42.576 [2024-12-11 15:07:35.587043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.576 [2024-12-11 15:07:35.587062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:42.576 [2024-12-11 15:07:35.592113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee5a90 00:26:42.576 [2024-12-11 15:07:35.592873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.576 [2024-12-11 15:07:35.592893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:42.576 [2024-12-11 15:07:35.602568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efb8b8 00:26:42.576 [2024-12-11 15:07:35.603792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.576 [2024-12-11 15:07:35.603811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0040 
p:0 m:0 dnr:0 00:26:42.576 [2024-12-11 15:07:35.612190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee6b70 00:26:42.576 [2024-12-11 15:07:35.613566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.576 [2024-12-11 15:07:35.613586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:42.576 [2024-12-11 15:07:35.620863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eeaab8 00:26:42.836 [2024-12-11 15:07:35.622283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.836 [2024-12-11 15:07:35.622303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:42.836 [2024-12-11 15:07:35.628938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eed4e8 00:26:42.836 [2024-12-11 15:07:35.629662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.836 [2024-12-11 15:07:35.629680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:42.836 [2024-12-11 15:07:35.638525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efac10 00:26:42.836 [2024-12-11 15:07:35.639374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.836 [2024-12-11 15:07:35.639393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:42.836 [2024-12-11 15:07:35.648134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef9b30 00:26:42.836 [2024-12-11 15:07:35.649131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.836 [2024-12-11 15:07:35.649150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:42.836 [2024-12-11 15:07:35.657713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef8a50 00:26:42.836 [2024-12-11 15:07:35.658828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.836 [2024-12-11 15:07:35.658846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:42.836 [2024-12-11 15:07:35.667011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef0788 00:26:42.836 [2024-12-11 15:07:35.668139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.836 [2024-12-11 15:07:35.668164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:42.836 [2024-12-11 15:07:35.675173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efac10 00:26:42.836 [2024-12-11 15:07:35.675734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.836 [2024-12-11 15:07:35.675753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:42.836 [2024-12-11 15:07:35.685711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee1f80 00:26:42.836 [2024-12-11 15:07:35.686831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:1801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.836 [2024-12-11 15:07:35.686851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:42.836 [2024-12-11 15:07:35.694981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef8618 00:26:42.836 [2024-12-11 15:07:35.696004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.836 [2024-12-11 15:07:35.696021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:42.836 [2024-12-11 15:07:35.705604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efac10 00:26:42.836 [2024-12-11 15:07:35.707152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.836 [2024-12-11 15:07:35.707177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:42.836 [2024-12-11 15:07:35.712313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee4578 00:26:42.836 [2024-12-11 15:07:35.713089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.836 [2024-12-11 15:07:35.713108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:42.836 [2024-12-11 15:07:35.723841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee88f8 00:26:42.836 [2024-12-11 15:07:35.725169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:10905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.836 [2024-12-11 15:07:35.725190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:42.836 [2024-12-11 15:07:35.732549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee7818 00:26:42.836 [2024-12-11 15:07:35.733559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.836 [2024-12-11 15:07:35.733578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:20 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:42.836 [2024-12-11 15:07:35.741655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef8e88 00:26:42.836 [2024-12-11 15:07:35.742572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.836 [2024-12-11 15:07:35.742591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:42.836 [2024-12-11 15:07:35.751826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016edfdc0 00:26:42.836 [2024-12-11 15:07:35.753031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.836 [2024-12-11 15:07:35.753054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:42.836 [2024-12-11 15:07:35.760587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee3060 00:26:42.837 [2024-12-11 15:07:35.761536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.837 [2024-12-11 15:07:35.761555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:42.837 [2024-12-11 15:07:35.768938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef9f68 00:26:42.837 [2024-12-11 15:07:35.769881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.837 [2024-12-11 15:07:35.769900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:42.837 [2024-12-11 15:07:35.778522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eec408 00:26:42.837 [2024-12-11 15:07:35.779586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.837 [2024-12-11 15:07:35.779606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:42.837 [2024-12-11 15:07:35.788361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eee190 00:26:42.837 [2024-12-11 15:07:35.789606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.837 [2024-12-11 15:07:35.789625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:42.837 [2024-12-11 15:07:35.797545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef0ff8 00:26:42.837 [2024-12-11 15:07:35.798531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.837 [2024-12-11 15:07:35.798550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:42.837 [2024-12-11 15:07:35.807118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efc998 00:26:42.837 [2024-12-11 15:07:35.808376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.837 [2024-12-11 15:07:35.808396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:42.837 [2024-12-11 15:07:35.816315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ede470 00:26:42.837 [2024-12-11 15:07:35.817273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.837 [2024-12-11 15:07:35.817293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:42.837 [2024-12-11 15:07:35.825616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eeaef0 00:26:42.837 [2024-12-11 15:07:35.826858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.837 [2024-12-11 15:07:35.826876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:42.837 [2024-12-11 15:07:35.835169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eddc00 00:26:42.837 [2024-12-11 15:07:35.836516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.837 [2024-12-11 15:07:35.836535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:42.837 [2024-12-11 15:07:35.844730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efc560 00:26:42.837 [2024-12-11 15:07:35.846191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.837 [2024-12-11 15:07:35.846211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:42.837 [2024-12-11 15:07:35.851303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef9f68 00:26:42.837 [2024-12-11 15:07:35.852019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.837 [2024-12-11 15:07:35.852038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:42.837 [2024-12-11 15:07:35.860867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef8e88 00:26:42.837 [2024-12-11 15:07:35.861724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.837 [2024-12-11 15:07:35.861741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:42.837 [2024-12-11 15:07:35.869903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef4298 00:26:42.837 [2024-12-11 15:07:35.870522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.837 [2024-12-11 15:07:35.870541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:42.837 [2024-12-11 15:07:35.879381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eeea00 00:26:42.837 [2024-12-11 15:07:35.880269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.837 [2024-12-11 15:07:35.880287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:43.096 [2024-12-11 15:07:35.888705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef96f8 00:26:43.096 [2024-12-11 15:07:35.889334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.096 [2024-12-11 15:07:35.889355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:43.096 [2024-12-11 15:07:35.897100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef4b08 00:26:43.096 [2024-12-11 15:07:35.897715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.096 [2024-12-11 15:07:35.897733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:43.096 [2024-12-11 15:07:35.908167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee2c28 00:26:43.096 [2024-12-11 15:07:35.909288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.096 [2024-12-11 15:07:35.909308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:43.096 [2024-12-11 15:07:35.917201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016edf550 00:26:43.096 [2024-12-11 15:07:35.918070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.096 [2024-12-11 15:07:35.918089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:43.096 [2024-12-11 15:07:35.925772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee1710 00:26:43.096 [2024-12-11 15:07:35.926614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.096 [2024-12-11 15:07:35.926633] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:43.096 [2024-12-11 15:07:35.934833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eef6a8 00:26:43.096 [2024-12-11 15:07:35.935735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.096 [2024-12-11 15:07:35.935753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:43.096 [2024-12-11 15:07:35.946073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee0630 00:26:43.096 [2024-12-11 15:07:35.947514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.096 [2024-12-11 15:07:35.947533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:43.096 [2024-12-11 15:07:35.952646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee5ec8 00:26:43.096 [2024-12-11 15:07:35.953320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.096 [2024-12-11 15:07:35.953339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:43.096 [2024-12-11 15:07:35.963869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef6890 00:26:43.096 [2024-12-11 15:07:35.965049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.096 [2024-12-11 15:07:35.965070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.096 [2024-12-11 15:07:35.972918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef7970 00:26:43.096 [2024-12-11 15:07:35.973851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.096 [2024-12-11 15:07:35.973870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:43.096 [2024-12-11 15:07:35.982303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efdeb0 00:26:43.096 [2024-12-11 15:07:35.983495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.096 [2024-12-11 15:07:35.983514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.096 [2024-12-11 15:07:35.990865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016edfdc0 00:26:43.096 [2024-12-11 15:07:35.991781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.097 [2024-12-11 15:07:35.991804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:43.097 [2024-12-11 15:07:35.999972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee3498 00:26:43.097 [2024-12-11 15:07:36.000933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.097 [2024-12-11 15:07:36.000952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:43.097 [2024-12-11 15:07:36.011094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eeea00 00:26:43.097 [2024-12-11 15:07:36.012552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.097 [2024-12-11 15:07:36.012572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:43.097 [2024-12-11 15:07:36.017541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efc560 00:26:43.097 [2024-12-11 15:07:36.018141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.097 [2024-12-11 15:07:36.018163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:43.097 [2024-12-11 15:07:36.026626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efc998 00:26:43.097 [2024-12-11 15:07:36.027352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.097 [2024-12-11 15:07:36.027371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:43.097 [2024-12-11 15:07:36.038446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016edece0 00:26:43.097 [2024-12-11 15:07:36.039906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.097 [2024-12-11 15:07:36.039925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:43.097 [2024-12-11 15:07:36.044881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef9f68 00:26:43.097 [2024-12-11 15:07:36.045480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.097 [2024-12-11 15:07:36.045499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:43.097 [2024-12-11 15:07:36.054597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efc998 00:26:43.097 [2024-12-11 15:07:36.055485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.097 [2024-12-11 15:07:36.055504] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:43.097 [2024-12-11 15:07:36.064148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee5a90 00:26:43.097 [2024-12-11 15:07:36.064984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.097 [2024-12-11 15:07:36.065005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:43.097 [2024-12-11 15:07:36.073315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee7818 00:26:43.097 [2024-12-11 15:07:36.074196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.097 [2024-12-11 15:07:36.074215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:43.097 [2024-12-11 15:07:36.082383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee1b48 00:26:43.097 [2024-12-11 15:07:36.083054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.097 [2024-12-11 15:07:36.083074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:43.097 [2024-12-11 15:07:36.092632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef0350 00:26:43.097 [2024-12-11 15:07:36.093759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.097 [2024-12-11 15:07:36.093778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:43.097 [2024-12-11 15:07:36.102228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef5378 00:26:43.097 [2024-12-11 15:07:36.103457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.097 [2024-12-11 15:07:36.103476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:43.097 [2024-12-11 15:07:36.110771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef0bc0 00:26:43.097 [2024-12-11 15:07:36.111732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.097 [2024-12-11 15:07:36.111752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:43.097 [2024-12-11 15:07:36.119790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef6cc8 00:26:43.097 [2024-12-11 15:07:36.120805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.097 [2024-12-11 
15:07:36.120824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:43.097 [2024-12-11 15:07:36.130963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee84c0 00:26:43.097 [2024-12-11 15:07:36.132472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.097 [2024-12-11 15:07:36.132492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:43.097 [2024-12-11 15:07:36.138433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eee5c8 00:26:43.097 [2024-12-11 15:07:36.139476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.097 [2024-12-11 15:07:36.139495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:43.356 [2024-12-11 15:07:36.150029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef1868 00:26:43.356 [2024-12-11 15:07:36.151615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.356 [2024-12-11 15:07:36.151634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:43.356 [2024-12-11 15:07:36.156774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016edf550 00:26:43.356 [2024-12-11 15:07:36.157603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.356 [2024-12-11 15:07:36.157622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:43.356 [2024-12-11 15:07:36.168909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efb480 00:26:43.356 [2024-12-11 15:07:36.170483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.356 [2024-12-11 15:07:36.170502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:43.356 [2024-12-11 15:07:36.176298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee84c0 00:26:43.356 [2024-12-11 15:07:36.177316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.356 [2024-12-11 15:07:36.177334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:43.356 [2024-12-11 15:07:36.187256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee84c0 00:26:43.356 [2024-12-11 15:07:36.188912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.356 
[2024-12-11 15:07:36.188930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:43.356 [2024-12-11 15:07:36.194089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee9e10 00:26:43.356 [2024-12-11 15:07:36.194959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.356 [2024-12-11 15:07:36.194977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:43.356 [2024-12-11 15:07:36.203676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef4298 00:26:43.356 [2024-12-11 15:07:36.204683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.356 [2024-12-11 15:07:36.204701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:43.356 [2024-12-11 15:07:36.214374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efb480 00:26:43.356 [2024-12-11 15:07:36.215637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.356 [2024-12-11 15:07:36.215656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.356 [2024-12-11 15:07:36.222072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efb8b8 00:26:43.356 [2024-12-11 15:07:36.222763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.222782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.231786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee0a68 00:26:43.357 [2024-12-11 15:07:36.232826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.232849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.241590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efa3a0 00:26:43.357 [2024-12-11 15:07:36.242746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.242764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.250252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef4b08 00:26:43.357 [2024-12-11 15:07:36.251009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21659 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:43.357 [2024-12-11 15:07:36.251027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.259264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efc560 00:26:43.357 [2024-12-11 15:07:36.260055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.260073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.268441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef7970 00:26:43.357 [2024-12-11 15:07:36.269205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.269223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.277599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef35f0 00:26:43.357 [2024-12-11 15:07:36.278364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.278382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.286855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eef270 00:26:43.357 [2024-12-11 15:07:36.287622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.287640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.296085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ede038 00:26:43.357 [2024-12-11 15:07:36.296853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.296871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.305255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee3060 00:26:43.357 [2024-12-11 15:07:36.306050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.306068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.314676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee88f8 00:26:43.357 [2024-12-11 15:07:36.315462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13563 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.315486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.323910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee49b0 00:26:43.357 [2024-12-11 15:07:36.324680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.324698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.333127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee7818 00:26:43.357 [2024-12-11 15:07:36.333897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.333915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.342297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eeaef0 00:26:43.357 [2024-12-11 15:07:36.343058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.343076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.351452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efe2e8 00:26:43.357 [2024-12-11 15:07:36.352241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.352259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.360622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eec408 00:26:43.357 [2024-12-11 15:07:36.361392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.361410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.369780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef0788 00:26:43.357 [2024-12-11 15:07:36.370688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.370706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.379070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef57b0 00:26:43.357 [2024-12-11 15:07:36.379838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8433 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.379856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.388402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee5658 00:26:43.357 [2024-12-11 15:07:36.389181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.389199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.357 [2024-12-11 15:07:36.399043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee4140 00:26:43.357 [2024-12-11 15:07:36.400320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.357 [2024-12-11 15:07:36.400338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.407827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eeea00 00:26:43.617 [2024-12-11 15:07:36.408664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.408684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.417315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee23b8 00:26:43.617 [2024-12-11 15:07:36.418012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.418030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.427848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efa7d8 00:26:43.617 [2024-12-11 15:07:36.429346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.429364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.434306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eeaef0 00:26:43.617 [2024-12-11 15:07:36.434934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.434952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.443029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee99d8 00:26:43.617 [2024-12-11 15:07:36.443688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:17935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.443706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.453299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef6020 00:26:43.617 [2024-12-11 15:07:36.454067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.454085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.462724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ede038 00:26:43.617 [2024-12-11 15:07:36.463607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.463625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.472021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee6fa8 00:26:43.617 [2024-12-11 15:07:36.472941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.472960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.481142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efa3a0 00:26:43.617 [2024-12-11 15:07:36.482043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.482061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.489795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eecc78 00:26:43.617 [2024-12-11 15:07:36.490673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.490691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.499468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eed920 00:26:43.617 [2024-12-11 15:07:36.500473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.500491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.508086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efef90 00:26:43.617 [2024-12-11 15:07:36.508733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:67 nsid:1 lba:20804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.508751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.517109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eeaab8 00:26:43.617 [2024-12-11 15:07:36.517754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.517773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.526300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee5a90 00:26:43.617 [2024-12-11 15:07:36.526939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.526958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:43.617 27334.00 IOPS, 106.77 MiB/s [2024-12-11T14:07:36.665Z] [2024-12-11 15:07:36.535451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:43.617 [2024-12-11 15:07:36.536103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.536123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.544561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:43.617 [2024-12-11 15:07:36.545227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.545245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.553748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:43.617 [2024-12-11 15:07:36.554425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.554446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.562931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:43.617 [2024-12-11 15:07:36.563614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.563633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.572393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:43.617 [2024-12-11 
15:07:36.573053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.573072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.581731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:43.617 [2024-12-11 15:07:36.582417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.582436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.590952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:43.617 [2024-12-11 15:07:36.591796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.591816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.600285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:43.617 [2024-12-11 15:07:36.600942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.617 [2024-12-11 15:07:36.600961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.617 [2024-12-11 15:07:36.609413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:43.617 [2024-12-11 15:07:36.610043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.618 [2024-12-11 15:07:36.610061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.618 [2024-12-11 15:07:36.618706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:43.618 [2024-12-11 15:07:36.619365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.618 [2024-12-11 15:07:36.619384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.618 [2024-12-11 15:07:36.627882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:43.618 [2024-12-11 15:07:36.628533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.618 [2024-12-11 15:07:36.628552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.618 [2024-12-11 15:07:36.637001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:43.618 
[2024-12-11 15:07:36.637675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:25594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.618 [2024-12-11 15:07:36.637693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.618 [2024-12-11 15:07:36.646125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:43.618 [2024-12-11 15:07:36.646782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.618 [2024-12-11 15:07:36.646801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.618 [2024-12-11 15:07:36.655279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:43.618 [2024-12-11 15:07:36.655928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.618 [2024-12-11 15:07:36.655946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.877 [2024-12-11 15:07:36.664640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:43.877 [2024-12-11 15:07:36.665293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.877 [2024-12-11 15:07:36.665312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.877 [2024-12-11 15:07:36.673938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:43.877 [2024-12-11 15:07:36.674615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.877 [2024-12-11 15:07:36.674633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.877 [2024-12-11 15:07:36.683139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:43.877 [2024-12-11 15:07:36.683840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.877 [2024-12-11 15:07:36.683859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:43.877 [2024-12-11 15:07:36.694437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee95a0 00:26:43.877 [2024-12-11 15:07:36.695717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.695736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.702111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with 
pdu=0x200016ee01f8 00:26:43.878 [2024-12-11 15:07:36.702791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.702810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.711291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee01f8 00:26:43.878 [2024-12-11 15:07:36.712062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.712081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.720704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef35f0 00:26:43.878 [2024-12-11 15:07:36.721263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.721282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.730128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef7100 00:26:43.878 [2024-12-11 15:07:36.731008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.731027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.739586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef96f8 00:26:43.878 [2024-12-11 15:07:36.740263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.740282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.749194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eeee38 00:26:43.878 [2024-12-11 15:07:36.749990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.750008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.758655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eee190 00:26:43.878 [2024-12-11 15:07:36.759818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.759837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.767302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8aae10) with pdu=0x200016efef90 00:26:43.878 [2024-12-11 15:07:36.768650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:17714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.768668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.775835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee73e0 00:26:43.878 [2024-12-11 15:07:36.776605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.776624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.785346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efac10 00:26:43.878 [2024-12-11 15:07:36.786207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.786226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.794906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef1430 00:26:43.878 [2024-12-11 15:07:36.795890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.795912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.803575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee3d08 00:26:43.878 [2024-12-11 15:07:36.804560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.804579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.813980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eed920 00:26:43.878 [2024-12-11 15:07:36.815143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.815166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.823712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee3d08 00:26:43.878 [2024-12-11 15:07:36.824973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.824991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.832484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8aae10) with pdu=0x200016eee190 00:26:43.878 [2024-12-11 15:07:36.833716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.833734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.842068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef3a28 00:26:43.878 [2024-12-11 15:07:36.843416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.843434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.851378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee7c50 00:26:43.878 [2024-12-11 15:07:36.852731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.852749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.859403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef9b30 00:26:43.878 [2024-12-11 15:07:36.860751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.860769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.867857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efc128 00:26:43.878 [2024-12-11 15:07:36.868616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:14707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.868634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.877301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee5ec8 00:26:43.878 [2024-12-11 15:07:36.878164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.878182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.886633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee8d30 00:26:43.878 [2024-12-11 15:07:36.887541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.887560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.896067] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efc560 00:26:43.878 [2024-12-11 15:07:36.896736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.896754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.906581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eeee38 00:26:43.878 [2024-12-11 15:07:36.908042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.908061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:43.878 [2024-12-11 15:07:36.915111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee4578 00:26:43.878 [2024-12-11 15:07:36.916230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:43.878 [2024-12-11 15:07:36.916248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:44.137 [2024-12-11 15:07:36.924382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef2d80 00:26:44.137 [2024-12-11 15:07:36.925530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.137 [2024-12-11 15:07:36.925548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:44.137 [2024-12-11 15:07:36.933701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efd640 00:26:44.137 [2024-12-11 15:07:36.934816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.137 [2024-12-11 15:07:36.934834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:44.137 [2024-12-11 15:07:36.942868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eee5c8 00:26:44.137 [2024-12-11 15:07:36.944030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.137 [2024-12-11 15:07:36.944047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:36.951220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef6020 00:26:44.138 [2024-12-11 15:07:36.952698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:36.952715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:36.959819] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efda78 00:26:44.138 [2024-12-11 15:07:36.960569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:6674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:36.960587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:36.969259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efa7d8 00:26:44.138 [2024-12-11 15:07:36.970116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:36.970134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:36.977949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efe720 00:26:44.138 [2024-12-11 15:07:36.978796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:36.978815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:36.988258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee4de8 00:26:44.138 [2024-12-11 15:07:36.989288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:36.989306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:36.997390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee7818 00:26:44.138 [2024-12-11 15:07:36.998417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:36.998436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:37.006568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef7970 00:26:44.138 [2024-12-11 15:07:37.007618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.007637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:37.015711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef4b08 00:26:44.138 [2024-12-11 15:07:37.016736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.016755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 
15:07:37.024924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eef270 00:26:44.138 [2024-12-11 15:07:37.025929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.025947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:37.034142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efd208 00:26:44.138 [2024-12-11 15:07:37.035167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.035189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:37.043309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee6b70 00:26:44.138 [2024-12-11 15:07:37.044309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.044327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:37.052517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eeaab8 00:26:44.138 [2024-12-11 15:07:37.053521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.053539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:37.061075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef35f0 00:26:44.138 [2024-12-11 15:07:37.062076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.062094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:37.071282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efc128 00:26:44.138 [2024-12-11 15:07:37.072464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:16524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.072482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:37.080717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef92c0 00:26:44.138 [2024-12-11 15:07:37.081872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.081893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.138 
[2024-12-11 15:07:37.090122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eea680 00:26:44.138 [2024-12-11 15:07:37.091246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.091265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:37.099309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee2c28 00:26:44.138 [2024-12-11 15:07:37.100429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.100448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:37.108481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef5be8 00:26:44.138 [2024-12-11 15:07:37.109612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.109631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:37.117645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efb8b8 00:26:44.138 [2024-12-11 15:07:37.118787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.118806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:37.126802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eddc00 00:26:44.138 [2024-12-11 15:07:37.127956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.127974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:37.136017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef9f68 00:26:44.138 [2024-12-11 15:07:37.137065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.137083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:37.145170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef8e88 00:26:44.138 [2024-12-11 15:07:37.146272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.146291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:26:44.138 [2024-12-11 15:07:37.153678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efda78 00:26:44.138 [2024-12-11 15:07:37.154938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.154957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:37.161572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef46d0 00:26:44.138 [2024-12-11 15:07:37.162317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.162336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:37.170879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee8d30 00:26:44.138 [2024-12-11 15:07:37.171620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.171638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:44.138 [2024-12-11 15:07:37.181461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef6890 00:26:44.138 [2024-12-11 15:07:37.182659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.138 [2024-12-11 15:07:37.182678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.190730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee38d0 00:26:44.398 [2024-12-11 15:07:37.191755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.191774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.200946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee3060 00:26:44.398 [2024-12-11 15:07:37.202219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.202238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.209736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef57b0 00:26:44.398 [2024-12-11 15:07:37.210898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.210917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 
sqhd:004d p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.218440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef4f40 00:26:44.398 [2024-12-11 15:07:37.219468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.219487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.226097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee7c50 00:26:44.398 [2024-12-11 15:07:37.226763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.226781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.235481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee23b8 00:26:44.398 [2024-12-11 15:07:37.236113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.236131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.244650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef20d8 00:26:44.398 [2024-12-11 15:07:37.245292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.245311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.255202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eed4e8 00:26:44.398 [2024-12-11 15:07:37.256089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.256107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.264036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee5658 00:26:44.398 [2024-12-11 15:07:37.264925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.264943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.273069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef9f68 00:26:44.398 [2024-12-11 15:07:37.273714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.273736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.281478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef20d8 00:26:44.398 [2024-12-11 15:07:37.282099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.282117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.292510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef20d8 00:26:44.398 [2024-12-11 15:07:37.293623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.293641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.302117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eeaab8 00:26:44.398 [2024-12-11 15:07:37.303361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.303378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.311720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee1b48 00:26:44.398 [2024-12-11 15:07:37.313083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.313102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.318261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef8e88 00:26:44.398 [2024-12-11 15:07:37.318883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.318902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.327858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef20d8 00:26:44.398 [2024-12-11 15:07:37.328664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.328684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.339236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efdeb0 00:26:44.398 [2024-12-11 15:07:37.340510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.340529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.347647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee12d8 00:26:44.398 [2024-12-11 15:07:37.348768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.348786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.356874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eebfd0 00:26:44.398 [2024-12-11 15:07:37.357925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:12001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.357946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.365030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef81e0 00:26:44.398 [2024-12-11 15:07:37.365621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.365640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.375500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef8a50 00:26:44.398 [2024-12-11 15:07:37.376341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.376360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.386394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef5be8 00:26:44.398 [2024-12-11 15:07:37.387938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.387957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.392912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef9f68 00:26:44.398 [2024-12-11 15:07:37.393599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.393617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.403332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eeaef0 00:26:44.398 [2024-12-11 15:07:37.404498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.404516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.411460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef0350 00:26:44.398 [2024-12-11 15:07:37.412151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.412174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.419891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee6b70 00:26:44.398 [2024-12-11 15:07:37.420557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.398 [2024-12-11 15:07:37.420575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:44.398 [2024-12-11 15:07:37.430872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee73e0 00:26:44.398 [2024-12-11 15:07:37.432029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.399 [2024-12-11 15:07:37.432046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.399 [2024-12-11 15:07:37.438868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef6890 00:26:44.399 [2024-12-11 15:07:37.439572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.399 [2024-12-11 15:07:37.439591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:44.658 [2024-12-11 15:07:37.448450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee0630 00:26:44.658 [2024-12-11 15:07:37.449136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.658 [2024-12-11 15:07:37.449155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:44.658 [2024-12-11 15:07:37.456940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efe2e8 00:26:44.658 [2024-12-11 15:07:37.457609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.658 [2024-12-11 15:07:37.457627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:44.658 [2024-12-11 15:07:37.467309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efd640 00:26:44.658 [2024-12-11 15:07:37.468221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.658 [2024-12-11 15:07:37.468240] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:44.658 [2024-12-11 15:07:37.477991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eecc78 00:26:44.658 [2024-12-11 15:07:37.479403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.658 [2024-12-11 15:07:37.479421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.658 [2024-12-11 15:07:37.486059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016efa3a0 00:26:44.658 [2024-12-11 15:07:37.486778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.658 [2024-12-11 15:07:37.486796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:44.658 [2024-12-11 15:07:37.497067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee6300 00:26:44.658 [2024-12-11 15:07:37.498599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.658 [2024-12-11 15:07:37.498618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:44.658 [2024-12-11 15:07:37.505029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ef4298 00:26:44.658 [2024-12-11 15:07:37.506065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.658 [2024-12-11 15:07:37.506084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:44.658 [2024-12-11 15:07:37.514472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee0630 00:26:44.658 [2024-12-11 15:07:37.515636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.658 [2024-12-11 15:07:37.515655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:44.658 [2024-12-11 15:07:37.521841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016eed4e8 00:26:44.658 [2024-12-11 15:07:37.522505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.658 [2024-12-11 15:07:37.522524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:44.658 [2024-12-11 15:07:37.531529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8aae10) with pdu=0x200016ee73e0 00:26:44.658 [2024-12-11 15:07:37.532444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:44.658 [2024-12-11 15:07:37.532462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:44.658 27554.00 IOPS, 107.63 MiB/s 00:26:44.658 Latency(us) 00:26:44.658 [2024-12-11T14:07:37.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.658 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:44.658 nvme0n1 : 2.01 27537.68 107.57 0.00 0.00 4643.68 1852.10 14930.81 00:26:44.658 [2024-12-11T14:07:37.706Z] =================================================================================================================== 00:26:44.658 [2024-12-11T14:07:37.706Z] Total : 27537.68 107.57 0.00 0.00 4643.68 1852.10 14930.81 00:26:44.658 { 00:26:44.658 "results": [ 00:26:44.658 { 00:26:44.658 "job": "nvme0n1", 00:26:44.658 "core_mask": "0x2", 00:26:44.658 "workload": "randwrite", 00:26:44.658 "status": "finished", 00:26:44.658 "queue_depth": 128, 00:26:44.658 "io_size": 4096, 00:26:44.658 "runtime": 2.007068, 00:26:44.658 "iops": 27537.681832404283, 00:26:44.658 "mibps": 107.56906965782923, 00:26:44.658 "io_failed": 0, 00:26:44.658 "io_timeout": 0, 00:26:44.658 "avg_latency_us": 4643.684508208715, 00:26:44.658 "min_latency_us": 1852.104347826087, 00:26:44.658 "max_latency_us": 14930.810434782608 00:26:44.658 } 00:26:44.658 ], 00:26:44.658 "core_count": 1 00:26:44.658 } 00:26:44.658 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:44.658 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:44.658 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:44.658 | .driver_specific 00:26:44.658 | .nvme_error 00:26:44.658 | .status_code 00:26:44.658 | .command_transient_transport_error' 00:26:44.658 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:44.917 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 216 > 0 )) 00:26:44.917 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3260386 00:26:44.917 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3260386 ']' 00:26:44.917 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3260386 00:26:44.917 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:44.917 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:44.917 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3260386 00:26:44.917 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:44.917 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:44.917 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3260386' 00:26:44.917 killing process with pid 3260386 00:26:44.917 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # 
kill 3260386 00:26:44.917 Received shutdown signal, test time was about 2.000000 seconds 00:26:44.917 00:26:44.917 Latency(us) 00:26:44.917 [2024-12-11T14:07:37.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:44.917 [2024-12-11T14:07:37.965Z] =================================================================================================================== 00:26:44.917 [2024-12-11T14:07:37.965Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:44.917 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3260386 00:26:45.176 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:45.176 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:45.176 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:45.176 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:45.176 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:45.176 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:45.176 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3260937 00:26:45.176 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3260937 /var/tmp/bperf.sock 00:26:45.176 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3260937 ']' 00:26:45.176 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:45.176 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:45.176 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:45.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:45.176 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:45.176 15:07:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.176 [2024-12-11 15:07:38.004057] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:26:45.176 [2024-12-11 15:07:38.004102] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3260937 ] 00:26:45.176 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:45.176 Zero copy mechanism will not be used. 
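The pass/fail check for the run above appears in the trace as (( 216 > 0 )): digest.sh's get_transient_errcount helper reads the bdev's error counters through bdev_get_iostat on the bperf socket and extracts command_transient_transport_error with jq. A rough standalone equivalent of that query (a sketch only, assuming the bdevperf instance is still serving RPCs on /var/tmp/bperf.sock and that nvme0n1 is the attached bdev):

  # Read the bdev's error counters from the bdevperf RPC socket and pull out the
  # number of completions with status TRANSIENT TRANSPORT ERROR (00/22), as digest.sh does.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The test passes when this count is non-zero, i.e. when at least one WRITE completed with status 00/22 after a deliberately corrupted data digest; here it reported 216.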
00:26:45.176 [2024-12-11 15:07:38.062640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.176 [2024-12-11 15:07:38.105601] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:45.176 15:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:45.176 15:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:45.176 15:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:45.176 15:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:45.435 15:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:45.435 15:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.435 15:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:45.435 15:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.435 15:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:45.435 15:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:46.002 nvme0n1 00:26:46.002 15:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:46.002 15:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.002 15:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:46.002 15:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.002 15:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:46.002 15:07:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:46.002 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:46.002 Zero copy mechanism will not be used. 00:26:46.002 Running I/O for 2 seconds... 
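Before the error output below, the xtrace above shows how run_bperf_err prepares this second pass (131072-byte random writes at queue depth 16, data digest enabled on the controller, crc32c corruption armed via accel_error_inject_error). A condensed sketch of that sequence, assuming the same workspace paths as this job and that rpc_cmd without -s targets the nvmf target application's default RPC socket rather than bperf.sock:

  # Start bdevperf as the initiator-side I/O generator (backgrounded; digest.sh then
  # waits for it to listen on /var/tmp/bperf.sock before issuing RPCs).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py

  # Track NVMe error status codes and retry indefinitely so digest errors show up
  # as counters instead of failing the bdev.
  $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Clear any previously armed crc32c injection, then attach the controller with
  # data digest enabled (--ddgst) over TCP.
  $RPC accel_error_inject_error -o crc32c -t disable
  $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Arm crc32c corruption in the target's accel error-injection module (-i 32 as used
  # by digest.sh), then drive the 2-second workload.
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
  /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests

With corruption armed, every affected WRITE is logged by data_crc32_calc_done as a data digest error on the target side and completes back to the initiator as TRANSIENT TRANSPORT ERROR (00/22), which is exactly the pattern filling the log below.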
00:26:46.002 [2024-12-11 15:07:38.928808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:38.928896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:38.928923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:38.934146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:38.934230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:38.934252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:38.938957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:38.939026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:38.939047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:38.943682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:38.943747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:38.943767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:38.948558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:38.948615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:38.948635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:38.953521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:38.953587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:38.953609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:38.958133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:38.958228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:38.958248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:38.962861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:38.962921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:38.962939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:38.967726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:38.967792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:38.967812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:38.972325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:38.972393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:38.972412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:38.977076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:38.977134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:38.977153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:38.981725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:38.981786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:38.981804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:38.986480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:38.986555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:38.986575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:38.991123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:38.991204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:38.991223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:38.995797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:38.995855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:38.995873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:39.000283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:39.000355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:39.000375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:39.004745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:39.004809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:39.004827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:39.009311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:39.009375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:39.009393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:39.014059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:39.014120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:39.014137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:39.018803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:39.018866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:39.018883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:39.023308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:39.023366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:39.023384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:39.027899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:39.027965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:39.027985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:39.032673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:39.032738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:39.032761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:39.037322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:39.037390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:39.037408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:39.041965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:39.042021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:39.042039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.003 [2024-12-11 15:07:39.046683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.003 [2024-12-11 15:07:39.046741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.003 [2024-12-11 15:07:39.046759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.263 [2024-12-11 15:07:39.051393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.263 [2024-12-11 15:07:39.051446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.263 [2024-12-11 15:07:39.051464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.263 [2024-12-11 15:07:39.056134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.263 [2024-12-11 15:07:39.056210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.263 [2024-12-11 15:07:39.056229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.263 [2024-12-11 15:07:39.060709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.263 [2024-12-11 15:07:39.060763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.263 [2024-12-11 15:07:39.060781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.065580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.065734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.065753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.070835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.070997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.071016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.077195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.077292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.077317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.082302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.082435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.082455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.087357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.087418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.087436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.092212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.092291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 
15:07:39.092310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.097985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.098151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.098176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.104198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.104283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.104303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.109948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.110113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.110132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.115758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.116050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.116071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.121788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.122145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.122170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.127804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.128113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.128132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.133708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.133964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:46.264 [2024-12-11 15:07:39.133983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.138488] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.138747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.138766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.143288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.143543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.143563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.148114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.148365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.148385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.152666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.152922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.152941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.157965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.158306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.158325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.163924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.164193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.164212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.168640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.168895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.168914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.173863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.174117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.174137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.178638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.178900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.178919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.184055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.184315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.184336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.190076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.190343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.190363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.196426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.196686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.196706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.203214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.264 [2024-12-11 15:07:39.203541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.264 [2024-12-11 15:07:39.203560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.264 [2024-12-11 15:07:39.210605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.265 [2024-12-11 15:07:39.210858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.265 [2024-12-11 15:07:39.210878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.265 [2024-12-11 15:07:39.217568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.265 [2024-12-11 15:07:39.217918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.265 [2024-12-11 15:07:39.217937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.265 [2024-12-11 15:07:39.224325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.265 [2024-12-11 15:07:39.224584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.265 [2024-12-11 15:07:39.224607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.265 [2024-12-11 15:07:39.231406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.265 [2024-12-11 15:07:39.231638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.265 [2024-12-11 15:07:39.231657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.265 [2024-12-11 15:07:39.238543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.265 [2024-12-11 15:07:39.238834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.265 [2024-12-11 15:07:39.238853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.265 [2024-12-11 15:07:39.245901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.265 [2024-12-11 15:07:39.246236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.265 [2024-12-11 15:07:39.246256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.265 [2024-12-11 15:07:39.253026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.265 [2024-12-11 15:07:39.253349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.265 [2024-12-11 15:07:39.253369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.265 [2024-12-11 15:07:39.260667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.265 [2024-12-11 15:07:39.260949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.265 [2024-12-11 15:07:39.260969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.265 [2024-12-11 15:07:39.267358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.265 [2024-12-11 15:07:39.267642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.265 [2024-12-11 15:07:39.267661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.265 [2024-12-11 15:07:39.274074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.265 [2024-12-11 15:07:39.274424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.265 [2024-12-11 15:07:39.274444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.265 [2024-12-11 15:07:39.280807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.265 [2024-12-11 15:07:39.281122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.265 [2024-12-11 15:07:39.281141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.265 [2024-12-11 15:07:39.287201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.265 [2024-12-11 15:07:39.287526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.265 [2024-12-11 15:07:39.287545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.265 [2024-12-11 15:07:39.293859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.265 [2024-12-11 15:07:39.294171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.265 [2024-12-11 15:07:39.294191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.265 [2024-12-11 15:07:39.300833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.265 [2024-12-11 15:07:39.301139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.265 [2024-12-11 15:07:39.301164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.265 [2024-12-11 15:07:39.308052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.265 [2024-12-11 15:07:39.308309] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.265 [2024-12-11 15:07:39.308330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.525 [2024-12-11 15:07:39.313193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.525 [2024-12-11 15:07:39.313428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.525 [2024-12-11 15:07:39.313448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.525 [2024-12-11 15:07:39.318962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.525 [2024-12-11 15:07:39.319265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.525 [2024-12-11 15:07:39.319284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.525 [2024-12-11 15:07:39.324836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.525 [2024-12-11 15:07:39.325069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.525 [2024-12-11 15:07:39.325089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.525 [2024-12-11 15:07:39.330096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.525 [2024-12-11 15:07:39.330365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.525 [2024-12-11 15:07:39.330384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.525 [2024-12-11 15:07:39.335725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.525 [2024-12-11 15:07:39.335972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.525 [2024-12-11 15:07:39.335991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.525 [2024-12-11 15:07:39.340560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.340793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.340812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.345405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.345647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.345668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.350672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.350919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.350938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.355910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.356176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.356195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.360493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.360738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.360757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.365042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.365285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.365305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.370069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.370318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.370337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.374842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.375087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.375105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.379402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 
15:07:39.379643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.379665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.383995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.384245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.384264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.389199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.389436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.389455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.394380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.394620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.394639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.399313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.399551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.399569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.403861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.404102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.404121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.408874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.409121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.409140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.413769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 
00:26:46.526 [2024-12-11 15:07:39.414006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.414025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.418545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.418782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.418801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.423124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.423371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.423391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.428279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.428515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.428534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.433112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.433353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.433389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.438741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.438986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.439006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.443957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.444198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.444218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.448836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with 
pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.449059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.449078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.453724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.453960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.453980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.458754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.458995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.459014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.463642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.463886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.463904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.468230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.468462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.468482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.526 [2024-12-11 15:07:39.472992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.526 [2024-12-11 15:07:39.473511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.526 [2024-12-11 15:07:39.473531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.478353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.478588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.478607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.483180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.483420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.483440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.488268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.488485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.488505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.493181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.493410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.493429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.498530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.498769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.498789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.503911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.504139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.504163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.508746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.508997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.509020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.513544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.513781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.513801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.518297] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.518535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.518555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.523340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.523585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.523604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.528809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.529049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.529068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.534087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.534327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.534346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.538938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.539185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.539204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.543827] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.544064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.544083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.548767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.549009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.549029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.553780] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.554024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.554044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.558809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.559050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.559069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.564076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.564314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.564334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.527 [2024-12-11 15:07:39.569060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.527 [2024-12-11 15:07:39.569306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.527 [2024-12-11 15:07:39.569326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.788 [2024-12-11 15:07:39.573910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.788 [2024-12-11 15:07:39.574151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.788 [2024-12-11 15:07:39.574176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.788 [2024-12-11 15:07:39.578950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.788 [2024-12-11 15:07:39.579208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.788 [2024-12-11 15:07:39.579228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.788 [2024-12-11 15:07:39.584200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.788 [2024-12-11 15:07:39.584439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.788 [2024-12-11 15:07:39.584458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.788 
[2024-12-11 15:07:39.589602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.589838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.589857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.595607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.595841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.595862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.600583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.600818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.600838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.605116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.605387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.605407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.609476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.609715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.609734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.614073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.614332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.614351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.618683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.618929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.618949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.623232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.623473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.623492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.627697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.627937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.627956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.632290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.632527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.632546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.636743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.636984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.637007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.641167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.641409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.641429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.645644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.645886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.645906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.650469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.650709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.650729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.655707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.655951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.655971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.660842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.661081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.661101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.665461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.665702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.665721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.669885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.670120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.670139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.674365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.674607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.674627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.678750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.678993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.679012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.683102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.683354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.683374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.687669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.687918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.687939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.692633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.692871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.692891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.697954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.698207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.698228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.702568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.702804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.702824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.707697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.707927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.707947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.713516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.789 [2024-12-11 15:07:39.713737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.789 [2024-12-11 15:07:39.713757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.789 [2024-12-11 15:07:39.719855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.720150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.720175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.726769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.727018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.727038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.732785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.732991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.733011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.737626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.737835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.737855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.741807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.742005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.742025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.745925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.746126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.746145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.750040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.750252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.750272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.754129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.754346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.754366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.758226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.758433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.758453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.762261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.762474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.762496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.766326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.766537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.766556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.770434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.770647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.770665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.774515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.774723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.774742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.778565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.778773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.778792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.782643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.782854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 
15:07:39.782873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.786729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.786940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.786959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.790766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.790989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.791008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.794848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.795060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.795080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.798904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.799113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.799132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.802935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.803148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.803172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.806988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.807202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.807222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.811018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.811233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:46.790 [2024-12-11 15:07:39.811252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.815069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.815282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.815301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.819078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.819287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.819306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.823030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.823232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.823252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.826955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.827151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.827175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:46.790 [2024-12-11 15:07:39.830923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:46.790 [2024-12-11 15:07:39.831120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:46.790 [2024-12-11 15:07:39.831139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.834984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.835188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.835208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.839627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.839814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.839834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.844578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.844751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.844772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.849173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.849331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.849351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.854170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.854334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.854354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.858918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.859095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.859115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.863492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.863677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.863697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.867897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.868091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.868110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.872354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.872540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.872559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.876910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.877064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.877084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.881918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.882094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.882114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.886774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.886961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.886980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.891476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.891668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.891688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.896077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.896257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.896276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.900882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.901065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.901084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.905813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.905982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.906002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.910659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.910810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.910829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.915215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.915379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.915402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.919698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.919873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.919893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.924128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.924350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.924371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.928568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.928760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.928780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.051 6179.00 IOPS, 772.38 MiB/s [2024-12-11T14:07:40.099Z] [2024-12-11 15:07:39.934419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.934477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.934496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.940248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.940327] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.940347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.945151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.945216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.945235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.949948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.950053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.950073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.051 [2024-12-11 15:07:39.954607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.051 [2024-12-11 15:07:39.954669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.051 [2024-12-11 15:07:39.954688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:39.959369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:39.959472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:39.959491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:39.964124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:39.964240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:39.964260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:39.968999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:39.969085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:39.969104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:39.973800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 
15:07:39.973901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:39.973920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:39.978547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:39.978616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:39.978634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:39.983345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:39.983395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:39.983412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:39.988223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:39.988292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:39.988312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:39.993374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:39.993456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:39.993474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:39.998654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:39.998804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:39.998827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:40.004957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:40.005108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:40.005136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:40.011847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 
00:26:47.052 [2024-12-11 15:07:40.011994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:40.012020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:40.017545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:40.017610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:40.017633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:40.022664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:40.022741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:40.022761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:40.028055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:40.028213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:40.028235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:40.034399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:40.034553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:40.034573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:40.041013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:40.041168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:40.041188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:40.047494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:40.047617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:40.047645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:40.053937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with 
pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:40.054100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:40.054125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:40.060132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:40.060287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:40.060307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:40.066799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:40.066944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:40.066964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:40.073489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:40.073681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:40.073701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:40.080280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:40.080663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:40.080683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:40.085222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:40.085499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:40.085518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:40.090173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:40.090422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:40.090444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.052 [2024-12-11 15:07:40.094970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.052 [2024-12-11 15:07:40.095233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.052 [2024-12-11 15:07:40.095253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.313 [2024-12-11 15:07:40.099881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.313 [2024-12-11 15:07:40.100133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.313 [2024-12-11 15:07:40.100155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.313 [2024-12-11 15:07:40.104769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.313 [2024-12-11 15:07:40.105020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.313 [2024-12-11 15:07:40.105041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.313 [2024-12-11 15:07:40.109425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.313 [2024-12-11 15:07:40.109691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.313 [2024-12-11 15:07:40.109711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.313 [2024-12-11 15:07:40.113904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.313 [2024-12-11 15:07:40.114167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.313 [2024-12-11 15:07:40.114188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.313 [2024-12-11 15:07:40.118349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.313 [2024-12-11 15:07:40.118606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.313 [2024-12-11 15:07:40.118626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.313 [2024-12-11 15:07:40.122836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.313 [2024-12-11 15:07:40.123099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.313 [2024-12-11 15:07:40.123119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.313 [2024-12-11 15:07:40.127396] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.313 [2024-12-11 15:07:40.127665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.313 [2024-12-11 15:07:40.127685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.313 [2024-12-11 15:07:40.132507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.313 [2024-12-11 15:07:40.132838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.313 [2024-12-11 15:07:40.132858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.313 [2024-12-11 15:07:40.138457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.313 [2024-12-11 15:07:40.138794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.313 [2024-12-11 15:07:40.138814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.313 [2024-12-11 15:07:40.144435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.313 [2024-12-11 15:07:40.144762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.313 [2024-12-11 15:07:40.144782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.313 [2024-12-11 15:07:40.150285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.313 [2024-12-11 15:07:40.150614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.313 [2024-12-11 15:07:40.150634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.156283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.156591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.156611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.162237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.162539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.162559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.168238] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.168521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.168541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.174758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.175001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.175020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.179492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.179732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.179752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.184370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.184625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.184645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.189283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.189522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.189543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.194268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.194519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.194543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.198873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.199127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.199147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 
15:07:40.203477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.203719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.203739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.208133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.208399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.208419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.213518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.213812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.213832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.219586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.219822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.219842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.224570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.224823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.224843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.230125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.230407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.230427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.235926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.236180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.236201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:26:47.314 [2024-12-11 15:07:40.243363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.243673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.243693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.250509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.250771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.250791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.257084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.257431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.257451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.264569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.264850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.264870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.272331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.272629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.272649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.279533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.279760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.279779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.286554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.286892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.286913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.294181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.294523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.294542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.300811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.301053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.301073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.305939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.314 [2024-12-11 15:07:40.306179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.314 [2024-12-11 15:07:40.306199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.314 [2024-12-11 15:07:40.310673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.315 [2024-12-11 15:07:40.310911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.315 [2024-12-11 15:07:40.310931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.315 [2024-12-11 15:07:40.316206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.315 [2024-12-11 15:07:40.316429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.315 [2024-12-11 15:07:40.316449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.315 [2024-12-11 15:07:40.320950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.315 [2024-12-11 15:07:40.321172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.315 [2024-12-11 15:07:40.321192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.315 [2024-12-11 15:07:40.326194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.315 [2024-12-11 15:07:40.326420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.315 [2024-12-11 15:07:40.326440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.315 [2024-12-11 15:07:40.331059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.315 [2024-12-11 15:07:40.331312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.315 [2024-12-11 15:07:40.331332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.315 [2024-12-11 15:07:40.335869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.315 [2024-12-11 15:07:40.336113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.315 [2024-12-11 15:07:40.336134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.315 [2024-12-11 15:07:40.340419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.315 [2024-12-11 15:07:40.340654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.315 [2024-12-11 15:07:40.340674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.315 [2024-12-11 15:07:40.345218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.315 [2024-12-11 15:07:40.345482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.315 [2024-12-11 15:07:40.345506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.315 [2024-12-11 15:07:40.350772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.315 [2024-12-11 15:07:40.351003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.315 [2024-12-11 15:07:40.351023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.315 [2024-12-11 15:07:40.355628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.315 [2024-12-11 15:07:40.355874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.315 [2024-12-11 15:07:40.355895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.575 [2024-12-11 15:07:40.360657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.360894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.360914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.365826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.366066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.366086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.370373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.370610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.370629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.374941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.375201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.375220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.379450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.379708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.379728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.384016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.384265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.384284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.388694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.388938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.388958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.393206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.393445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.393465] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.397634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.397892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.397912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.402084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.402339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.402359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.406627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.406857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.406876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.411538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.411772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.411792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.416817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.417066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.417086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.422145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.422379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.422398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.427230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.427461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.427480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.431895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.432167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.432188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.436482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.436748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.436767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.441036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.441277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.441298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.445546] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.445794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.445814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.450335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.450584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.450604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.454808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.455047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.455066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.459383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.459636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 
15:07:40.459656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.464248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.464510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.464531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.469408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.469650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.469674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.474504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.474750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.474769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.479772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.480001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.480021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.485078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.576 [2024-12-11 15:07:40.485321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.576 [2024-12-11 15:07:40.485341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.576 [2024-12-11 15:07:40.489845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.490093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.490113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.494568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.494810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:47.577 [2024-12-11 15:07:40.494829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.499764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.499985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.500004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.504905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.505149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.505175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.509955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.510200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.510220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.514852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.515090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.515110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.519531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.519772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.519791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.524054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.524296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.524316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.528582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.528818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.528838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.533066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.533340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.533360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.537708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.537957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.537976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.542330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.542561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.542581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.546825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.547069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.547089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.551398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.551636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.551656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.555960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.556201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.556220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.560559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.560801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.560820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.565052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.565297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.565317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.569529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.569775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.569795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.574503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.574734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.574754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.579628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.579874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.579894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.584859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.585097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.585116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.590187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.590275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.590295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.595481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.595718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.595742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.600015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.600265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.600285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.604642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.577 [2024-12-11 15:07:40.604888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.577 [2024-12-11 15:07:40.604907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.577 [2024-12-11 15:07:40.609145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.578 [2024-12-11 15:07:40.609379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.578 [2024-12-11 15:07:40.609399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.578 [2024-12-11 15:07:40.614461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.578 [2024-12-11 15:07:40.614696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.578 [2024-12-11 15:07:40.614715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.578 [2024-12-11 15:07:40.619055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.578 [2024-12-11 15:07:40.619326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.578 [2024-12-11 15:07:40.619346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.838 [2024-12-11 15:07:40.623505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.838 [2024-12-11 15:07:40.623753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.838 [2024-12-11 15:07:40.623773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.838 [2024-12-11 15:07:40.627965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.838 [2024-12-11 15:07:40.628224] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.838 [2024-12-11 15:07:40.628243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.838 [2024-12-11 15:07:40.632449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.838 [2024-12-11 15:07:40.632700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.838 [2024-12-11 15:07:40.632720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.838 [2024-12-11 15:07:40.636864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.838 [2024-12-11 15:07:40.637105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.838 [2024-12-11 15:07:40.637125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.641249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.641485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.641506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.645774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.646007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.646027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.650186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.650439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.650458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.654601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.654845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.654864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.659023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.659271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.659291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.663450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.663695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.663714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.667886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.668141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.668165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.672260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.672495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.672514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.676667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.676918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.676937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.681073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.681318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.681338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.685487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.685736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.685755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.689969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 
15:07:40.690259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.690278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.694482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.694728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.694749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.698982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.699230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.699249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.703501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.703744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.703764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.707988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.708236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.708255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.712390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.712627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.712651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.716758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.717010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.717029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.721099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 
00:26:47.839 [2024-12-11 15:07:40.721344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.721363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.725588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.725840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.725860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.730690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.730933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.730953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.735874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.736107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.736126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.740963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.741216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.741235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.746509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.746744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.746764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.751311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.751546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.751565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.756691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with 
pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.756936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.756960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.761614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.761854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.839 [2024-12-11 15:07:40.761874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.839 [2024-12-11 15:07:40.767227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.839 [2024-12-11 15:07:40.767475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.767493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.774717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.774942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.774962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.781380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.781669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.781689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.787584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.787819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.787839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.794332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.794565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.794584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.800798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.801086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.801106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.805982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.806220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.806240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.811271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.811506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.811525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.816432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.816668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.816687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.821383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.821617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.821637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.826293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.826532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.826552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.831797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.832032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.832051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.836985] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.837229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.837248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.842132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.842372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.842392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.847094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.847349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.847368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.852083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.852310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.852329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.857023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.857272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.857292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.862258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.862497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.862516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.867142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.867390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.867409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.871786] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.872032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.872051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.876725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.876961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.876980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:47.840 [2024-12-11 15:07:40.881694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:47.840 [2024-12-11 15:07:40.881938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:47.840 [2024-12-11 15:07:40.881957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.100 [2024-12-11 15:07:40.886743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:48.100 [2024-12-11 15:07:40.886968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.100 [2024-12-11 15:07:40.886987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.100 [2024-12-11 15:07:40.892394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:48.100 [2024-12-11 15:07:40.892617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.100 [2024-12-11 15:07:40.892636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.100 [2024-12-11 15:07:40.897672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:48.100 [2024-12-11 15:07:40.897919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.100 [2024-12-11 15:07:40.897946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.100 [2024-12-11 15:07:40.902291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:48.100 [2024-12-11 15:07:40.902528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.100 [2024-12-11 15:07:40.902548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.100 
[2024-12-11 15:07:40.906772] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:48.100 [2024-12-11 15:07:40.907015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.100 [2024-12-11 15:07:40.907034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.100 [2024-12-11 15:07:40.911212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:48.100 [2024-12-11 15:07:40.911450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.100 [2024-12-11 15:07:40.911470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.100 [2024-12-11 15:07:40.915707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:48.100 [2024-12-11 15:07:40.915954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.100 [2024-12-11 15:07:40.915973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.100 [2024-12-11 15:07:40.920240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:48.100 [2024-12-11 15:07:40.920478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.100 [2024-12-11 15:07:40.920498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.100 [2024-12-11 15:07:40.924688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:48.100 [2024-12-11 15:07:40.924920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.100 [2024-12-11 15:07:40.924940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:48.100 [2024-12-11 15:07:40.929350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:48.100 [2024-12-11 15:07:40.929589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.100 [2024-12-11 15:07:40.929608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:48.100 6116.50 IOPS, 764.56 MiB/s [2024-12-11T14:07:41.148Z] [2024-12-11 15:07:40.935066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8ab150) with pdu=0x200016eff3c8 00:26:48.100 [2024-12-11 15:07:40.935139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.100 [2024-12-11 15:07:40.935163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.100 00:26:48.100 Latency(us) 00:26:48.100 [2024-12-11T14:07:41.148Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.100 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:48.100 nvme0n1 : 2.00 6113.99 764.25 0.00 0.00 2612.27 1517.30 7864.32 00:26:48.100 [2024-12-11T14:07:41.148Z] =================================================================================================================== 00:26:48.100 [2024-12-11T14:07:41.148Z] Total : 6113.99 764.25 0.00 0.00 2612.27 1517.30 7864.32 00:26:48.100 { 00:26:48.100 "results": [ 00:26:48.100 { 00:26:48.100 "job": "nvme0n1", 00:26:48.100 "core_mask": "0x2", 00:26:48.100 "workload": "randwrite", 00:26:48.100 "status": "finished", 00:26:48.100 "queue_depth": 16, 00:26:48.100 "io_size": 131072, 00:26:48.100 "runtime": 2.003437, 00:26:48.100 "iops": 6113.993102852747, 00:26:48.100 "mibps": 764.2491378565934, 00:26:48.100 "io_failed": 0, 00:26:48.100 "io_timeout": 0, 00:26:48.100 "avg_latency_us": 2612.268863403224, 00:26:48.100 "min_latency_us": 1517.3008695652175, 00:26:48.100 "max_latency_us": 7864.32 00:26:48.100 } 00:26:48.100 ], 00:26:48.100 "core_count": 1 00:26:48.100 } 00:26:48.100 15:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:48.100 15:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:48.100 15:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:48.100 | .driver_specific 00:26:48.100 | .nvme_error 00:26:48.100 | .status_code 00:26:48.100 | .command_transient_transport_error' 00:26:48.100 15:07:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 396 > 0 )) 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3260937 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3260937 ']' 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3260937 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3260937 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3260937' 00:26:48.360 killing process with pid 3260937 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3260937 00:26:48.360 Received shutdown signal, test time was about 2.000000 seconds 00:26:48.360 
00:26:48.360 Latency(us) 00:26:48.360 [2024-12-11T14:07:41.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.360 [2024-12-11T14:07:41.408Z] =================================================================================================================== 00:26:48.360 [2024-12-11T14:07:41.408Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3260937 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3259129 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3259129 ']' 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3259129 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:48.360 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3259129 00:26:48.621 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:48.621 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:48.621 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3259129' 00:26:48.621 killing process with pid 3259129 00:26:48.621 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3259129 00:26:48.621 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3259129 00:26:48.621 00:26:48.621 real 0m14.301s 00:26:48.621 user 0m27.398s 00:26:48.621 sys 0m4.567s 00:26:48.621 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:48.621 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:48.621 ************************************ 00:26:48.621 END TEST nvmf_digest_error 00:26:48.621 ************************************ 00:26:48.621 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:48.621 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:48.621 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:48.621 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:26:48.621 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:48.621 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:26:48.621 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:48.621 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:48.621 rmmod nvme_tcp 00:26:48.896 rmmod nvme_fabrics 00:26:48.896 rmmod nvme_keyring 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@129 -- # return 0 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3259129 ']' 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3259129 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3259129 ']' 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3259129 00:26:48.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (3259129) - No such process 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3259129 is not found' 00:26:48.896 Process with pid 3259129 is not found 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.896 15:07:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.847 15:07:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:50.847 00:26:50.847 real 0m36.662s 00:26:50.847 user 0m55.932s 00:26:50.847 sys 0m13.656s 00:26:50.847 15:07:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:50.847 15:07:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:50.847 ************************************ 00:26:50.847 END TEST nvmf_digest 00:26:50.847 ************************************ 00:26:50.847 15:07:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:50.847 15:07:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:50.847 15:07:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:50.847 15:07:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:50.847 15:07:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:50.847 15:07:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:50.847 15:07:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.847 ************************************ 00:26:50.847 START TEST nvmf_bdevperf 00:26:50.847 ************************************ 00:26:50.847 15:07:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:51.106 * Looking for test storage... 00:26:51.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:26:51.106 15:07:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:51.106 15:07:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:26:51.106 15:07:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:51.106 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:51.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.107 --rc genhtml_branch_coverage=1 00:26:51.107 --rc genhtml_function_coverage=1 00:26:51.107 --rc genhtml_legend=1 00:26:51.107 --rc geninfo_all_blocks=1 00:26:51.107 --rc geninfo_unexecuted_blocks=1 00:26:51.107 00:26:51.107 ' 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:51.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.107 --rc genhtml_branch_coverage=1 00:26:51.107 --rc genhtml_function_coverage=1 00:26:51.107 --rc genhtml_legend=1 00:26:51.107 --rc geninfo_all_blocks=1 00:26:51.107 --rc geninfo_unexecuted_blocks=1 00:26:51.107 00:26:51.107 ' 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:51.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.107 --rc genhtml_branch_coverage=1 00:26:51.107 --rc genhtml_function_coverage=1 00:26:51.107 --rc genhtml_legend=1 00:26:51.107 --rc geninfo_all_blocks=1 00:26:51.107 --rc geninfo_unexecuted_blocks=1 00:26:51.107 00:26:51.107 ' 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:51.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:51.107 --rc genhtml_branch_coverage=1 00:26:51.107 --rc genhtml_function_coverage=1 00:26:51.107 --rc genhtml_legend=1 00:26:51.107 --rc geninfo_all_blocks=1 00:26:51.107 --rc geninfo_unexecuted_blocks=1 00:26:51.107 00:26:51.107 ' 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:51.107 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:26:51.107 15:07:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:57.677 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:57.677 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
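The device scan traced above resolves each supported PCI NIC to its kernel net interface by globbing sysfs. A minimal standalone sketch of that lookup follows; the PCI address is just the one seen in this run, not a fixed value.

# Sketch: list the net interfaces behind one PCI device, mirroring the
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) step in nvmf/common.sh above.
pci=0000:86:00.0                          # example address from this run
for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$dev" ] || continue             # skip devices that expose no netdev
    echo "Found net devices under $pci: ${dev##*/}"
done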
00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:57.677 Found net devices under 0000:86:00.0: cvl_0_0 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:57.677 Found net devices under 0000:86:00.1: cvl_0_1 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:57.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:57.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:26:57.677 00:26:57.677 --- 10.0.0.2 ping statistics --- 00:26:57.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.677 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:57.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:57.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:26:57.677 00:26:57.677 --- 10.0.0.1 ping statistics --- 00:26:57.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:57.677 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:57.677 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3264958 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3264958 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3264958 ']' 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:57.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:57.678 15:07:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:57.678 [2024-12-11 15:07:50.018930] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:26:57.678 [2024-12-11 15:07:50.018983] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:57.678 [2024-12-11 15:07:50.103004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:57.678 [2024-12-11 15:07:50.144883] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:57.678 [2024-12-11 15:07:50.144919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:57.678 [2024-12-11 15:07:50.144929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:57.678 [2024-12-11 15:07:50.144936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:57.678 [2024-12-11 15:07:50.144943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:57.678 [2024-12-11 15:07:50.146394] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:57.678 [2024-12-11 15:07:50.146501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.678 [2024-12-11 15:07:50.146503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:57.678 [2024-12-11 15:07:50.282913] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:57.678 Malloc0 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
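The bdevperf.sh@17-19 trace lines above bring the target up over JSON-RPC: a TCP transport, a 64 MiB malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1; the namespace and the 10.0.0.2:4420 listener are added in the trace lines that follow. A condensed sketch of the same sequence, assuming the default /var/tmp/spdk.sock RPC socket and with the rpc.py path abbreviated:

# Sketch of the target bring-up driven by rpc_cmd above (flags copied from the trace).
rpc=./scripts/rpc.py                                    # path is an example
$rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, options as traced
$rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420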
00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:57.678 [2024-12-11 15:07:50.348013] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:26:57.678 { 00:26:57.678 "params": { 00:26:57.678 "name": "Nvme$subsystem", 00:26:57.678 "trtype": "$TEST_TRANSPORT", 00:26:57.678 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.678 "adrfam": "ipv4", 00:26:57.678 "trsvcid": "$NVMF_PORT", 00:26:57.678 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.678 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.678 "hdgst": ${hdgst:-false}, 00:26:57.678 "ddgst": ${ddgst:-false} 00:26:57.678 }, 00:26:57.678 "method": "bdev_nvme_attach_controller" 00:26:57.678 } 00:26:57.678 EOF 00:26:57.678 )") 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:57.678 15:07:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:57.678 "params": { 00:26:57.678 "name": "Nvme1", 00:26:57.678 "trtype": "tcp", 00:26:57.678 "traddr": "10.0.0.2", 00:26:57.678 "adrfam": "ipv4", 00:26:57.678 "trsvcid": "4420", 00:26:57.678 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:57.678 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:57.678 "hdgst": false, 00:26:57.678 "ddgst": false 00:26:57.678 }, 00:26:57.678 "method": "bdev_nvme_attach_controller" 00:26:57.678 }' 00:26:57.678 [2024-12-11 15:07:50.398653] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
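bdevperf.sh@27 above pipes the JSON emitted by gen_nvmf_target_json into bdevperf on an anonymous file descriptor. Written to a temporary file instead, the invocation reduces to roughly the following; the workload flags -q 128 -o 4096 -w verify -t 1 are the ones traced above, while $SPDK_DIR and the temp path are placeholders for this sketch.

# Sketch of the traced bdevperf invocation, with the generated config in a file.
gen_nvmf_target_json > /tmp/bperf.json            # helper from nvmf/common.sh, as traced above
"$SPDK_DIR"/build/examples/bdevperf --json /tmp/bperf.json \
    -q 128 -o 4096 -w verify -t 1                 # queue depth 128, 4 KiB I/O, verify workload, 1 s run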
00:26:57.678 [2024-12-11 15:07:50.398706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265026 ]
00:26:57.678 [2024-12-11 15:07:50.476886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:57.678 [2024-12-11 15:07:50.517758] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:26:57.936 Running I/O for 1 seconds...
00:26:58.868 10992.00 IOPS, 42.94 MiB/s
00:26:58.868 Latency(us) [2024-12-11T14:07:51.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:58.868 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:58.868 Verification LBA range: start 0x0 length 0x4000
00:26:58.868 Nvme1n1 : 1.01 11014.86 43.03 0.00 0.00 11576.93 1524.42 12993.22
00:26:58.868 [2024-12-11T14:07:51.916Z] ===================================================================================================================
00:26:58.868 [2024-12-11T14:07:51.916Z] Total : 11014.86 43.03 0.00 0.00 11576.93 1524.42 12993.22
00:26:59.125 15:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3265358
00:26:59.125 15:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:26:59.125 15:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:26:59.125 15:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:26:59.125 15:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:26:59.125 15:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:26:59.125 15:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:26:59.125 15:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:26:59.125 {
00:26:59.125 "params": {
00:26:59.125 "name": "Nvme$subsystem",
00:26:59.125 "trtype": "$TEST_TRANSPORT",
00:26:59.125 "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:59.125 "adrfam": "ipv4",
00:26:59.125 "trsvcid": "$NVMF_PORT",
00:26:59.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:59.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:59.125 "hdgst": ${hdgst:-false},
00:26:59.125 "ddgst": ${ddgst:-false}
00:26:59.125 },
00:26:59.125 "method": "bdev_nvme_attach_controller"
00:26:59.125 }
00:26:59.125 EOF
00:26:59.125 )")
00:26:59.125 15:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:26:59.125 15:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:26:59.125 15:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:26:59.125 15:07:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:26:59.125 "params": { 00:26:59.125 "name": "Nvme1", 00:26:59.125 "trtype": "tcp", 00:26:59.125 "traddr": "10.0.0.2", 00:26:59.125 "adrfam": "ipv4", 00:26:59.125 "trsvcid": "4420", 00:26:59.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:59.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:59.125 "hdgst": false, 00:26:59.125 "ddgst": false 00:26:59.125 }, 00:26:59.125 "method": "bdev_nvme_attach_controller" 00:26:59.125 }' 00:26:59.125 [2024-12-11 15:07:52.017724] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:26:59.125 [2024-12-11 15:07:52.017774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265358 ] 00:26:59.125 [2024-12-11 15:07:52.095431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.125 [2024-12-11 15:07:52.132920] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.688 Running I/O for 15 seconds... 00:27:01.549 11146.00 IOPS, 43.54 MiB/s [2024-12-11T14:07:55.167Z] 11065.50 IOPS, 43.22 MiB/s [2024-12-11T14:07:55.167Z] 15:07:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3264958 00:27:02.119 15:07:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:02.119 [2024-12-11 15:07:54.993232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:91416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.119 [2024-12-11 15:07:54.993267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.119 [2024-12-11 15:07:54.993286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:91424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.119 [2024-12-11 15:07:54.993294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.119 [2024-12-11 15:07:54.993305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:91432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.119 [2024-12-11 15:07:54.993318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.119 [2024-12-11 15:07:54.993327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:91456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 
15:07:54.993369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:91480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:91488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:91496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:91504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:91512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:91520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:91528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:91536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:91544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:91552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:91576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:91584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:91592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:91608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:91624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:91632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.120 [2024-12-11 15:07:54.993734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.120 [2024-12-11 15:07:54.993743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:91640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.993752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.993760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:91648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.993768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.993776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:91656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.993783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.993791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:91664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.993797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.993805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.993814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.993825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:91680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.993833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.993843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:91688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.993854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.993864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.993873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.993884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:91704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.993893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.993903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:91712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.993911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.993923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:91720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.993933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.993943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:91728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.993951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.993962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:91736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.993971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.993985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:91744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.993993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.994004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:91752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.994012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.994020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.994028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.994037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:91768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.994044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.994052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:91776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.994059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 
[2024-12-11 15:07:54.994067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:91784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.994074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.994081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:91792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.994088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.994096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:91800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.121 [2024-12-11 15:07:54.994103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.121 [2024-12-11 15:07:54.994111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:91808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:91832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:91840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:91856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994221] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:91880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:91888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:91928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.122 [2024-12-11 15:07:54.994494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.122 [2024-12-11 15:07:54.994503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:35 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92104 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.123 [2024-12-11 15:07:54.994706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.123 [2024-12-11 15:07:54.994720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:02.123 [2024-12-11 15:07:54.994824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.123 [2024-12-11 15:07:54.994853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.123 [2024-12-11 15:07:54.994860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.994868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.994874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.994882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.994889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.994897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.994903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.994911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.994917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.994925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.994934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.994942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.994949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.994957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.994963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.994971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.994978] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.994986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.994992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.995007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.995022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.995036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.995051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.995065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.995082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.124 [2024-12-11 15:07:54.995096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.124 [2024-12-11 15:07:54.995111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:92400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.124 [2024-12-11 15:07:54.995127] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.124 [2024-12-11 15:07:54.995142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:92416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.124 [2024-12-11 15:07:54.995160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.124 [2024-12-11 15:07:54.995175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:02.124 [2024-12-11 15:07:54.995190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.995204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.995219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.995233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.995248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.995262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.124 [2024-12-11 15:07:54.995277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.995284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a7510 is same with the state(6) to be set 00:27:02.124 [2024-12-11 15:07:54.995292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:02.124 [2024-12-11 15:07:54.995298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:02.124 [2024-12-11 15:07:54.995304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92360 len:8 PRP1 0x0 PRP2 0x0 00:27:02.124 [2024-12-11 15:07:54.995313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:02.124 [2024-12-11 15:07:54.998237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.124 [2024-12-11 15:07:54.998288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.124 [2024-12-11 15:07:54.998804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.124 [2024-12-11 15:07:54.998819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.124 [2024-12-11 15:07:54.998828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.124 [2024-12-11 15:07:54.999007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.124 [2024-12-11 15:07:54.999190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.124 [2024-12-11 15:07:54.999199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.124 [2024-12-11 15:07:54.999207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.124 [2024-12-11 15:07:54.999215] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.124 [2024-12-11 15:07:55.011562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.124 [2024-12-11 15:07:55.011985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.124 [2024-12-11 15:07:55.012004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.124 [2024-12-11 15:07:55.012012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.124 [2024-12-11 15:07:55.012195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.124 [2024-12-11 15:07:55.012369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.124 [2024-12-11 15:07:55.012377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.124 [2024-12-11 15:07:55.012384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:27:02.124 [2024-12-11 15:07:55.012391] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.124 [2024-12-11 15:07:55.024524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.124 [2024-12-11 15:07:55.024959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.124 [2024-12-11 15:07:55.024976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.124 [2024-12-11 15:07:55.024983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.124 [2024-12-11 15:07:55.025147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.124 [2024-12-11 15:07:55.025319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.124 [2024-12-11 15:07:55.025327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.124 [2024-12-11 15:07:55.025333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.124 [2024-12-11 15:07:55.025340] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.124 [2024-12-11 15:07:55.037425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.124 [2024-12-11 15:07:55.037691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.124 [2024-12-11 15:07:55.037708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.124 [2024-12-11 15:07:55.037715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.124 [2024-12-11 15:07:55.037879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.124 [2024-12-11 15:07:55.038042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.124 [2024-12-11 15:07:55.038050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.124 [2024-12-11 15:07:55.038056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.124 [2024-12-11 15:07:55.038062] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.124 [2024-12-11 15:07:55.050348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.124 [2024-12-11 15:07:55.050649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.125 [2024-12-11 15:07:55.050666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.125 [2024-12-11 15:07:55.050673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.125 [2024-12-11 15:07:55.050845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.125 [2024-12-11 15:07:55.051017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.125 [2024-12-11 15:07:55.051026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.125 [2024-12-11 15:07:55.051032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.125 [2024-12-11 15:07:55.051038] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.125 [2024-12-11 15:07:55.063193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.125 [2024-12-11 15:07:55.063505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.125 [2024-12-11 15:07:55.063521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.125 [2024-12-11 15:07:55.063528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.125 [2024-12-11 15:07:55.063692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.125 [2024-12-11 15:07:55.063856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.125 [2024-12-11 15:07:55.063864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.125 [2024-12-11 15:07:55.063870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.125 [2024-12-11 15:07:55.063876] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.125 [2024-12-11 15:07:55.076092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.125 [2024-12-11 15:07:55.076495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.125 [2024-12-11 15:07:55.076512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.125 [2024-12-11 15:07:55.076522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.125 [2024-12-11 15:07:55.076686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.125 [2024-12-11 15:07:55.076850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.125 [2024-12-11 15:07:55.076858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.125 [2024-12-11 15:07:55.076865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.125 [2024-12-11 15:07:55.076872] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.125 [2024-12-11 15:07:55.088929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.125 [2024-12-11 15:07:55.089353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.125 [2024-12-11 15:07:55.089370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.125 [2024-12-11 15:07:55.089377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.125 [2024-12-11 15:07:55.089540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.125 [2024-12-11 15:07:55.089703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.125 [2024-12-11 15:07:55.089711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.125 [2024-12-11 15:07:55.089716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.125 [2024-12-11 15:07:55.089722] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.125 [2024-12-11 15:07:55.102057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.125 [2024-12-11 15:07:55.102457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.125 [2024-12-11 15:07:55.102474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.125 [2024-12-11 15:07:55.102481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.125 [2024-12-11 15:07:55.102645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.125 [2024-12-11 15:07:55.102828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.125 [2024-12-11 15:07:55.102837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.125 [2024-12-11 15:07:55.102843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.125 [2024-12-11 15:07:55.102849] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.125 [2024-12-11 15:07:55.114919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.125 [2024-12-11 15:07:55.115333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.125 [2024-12-11 15:07:55.115379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.125 [2024-12-11 15:07:55.115402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.125 [2024-12-11 15:07:55.115984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.125 [2024-12-11 15:07:55.116549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.125 [2024-12-11 15:07:55.116557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.125 [2024-12-11 15:07:55.116563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.125 [2024-12-11 15:07:55.116569] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.125 [2024-12-11 15:07:55.127792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.125 [2024-12-11 15:07:55.128217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.125 [2024-12-11 15:07:55.128291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.125 [2024-12-11 15:07:55.128319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.125 [2024-12-11 15:07:55.128904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.125 [2024-12-11 15:07:55.129507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.125 [2024-12-11 15:07:55.129533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.125 [2024-12-11 15:07:55.129556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.125 [2024-12-11 15:07:55.129562] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.125 [2024-12-11 15:07:55.140689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.125 [2024-12-11 15:07:55.141115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.125 [2024-12-11 15:07:55.141131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.125 [2024-12-11 15:07:55.141138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.125 [2024-12-11 15:07:55.141308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.125 [2024-12-11 15:07:55.141471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.125 [2024-12-11 15:07:55.141478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.125 [2024-12-11 15:07:55.141484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.125 [2024-12-11 15:07:55.141490] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.125 [2024-12-11 15:07:55.153605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.125 [2024-12-11 15:07:55.154035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.126 [2024-12-11 15:07:55.154051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.126 [2024-12-11 15:07:55.154058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.126 [2024-12-11 15:07:55.154227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.126 [2024-12-11 15:07:55.154391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.126 [2024-12-11 15:07:55.154397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.126 [2024-12-11 15:07:55.154406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.126 [2024-12-11 15:07:55.154412] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.386 [2024-12-11 15:07:55.166567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.386 [2024-12-11 15:07:55.166924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.386 [2024-12-11 15:07:55.166941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.386 [2024-12-11 15:07:55.166949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.386 [2024-12-11 15:07:55.167113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.386 [2024-12-11 15:07:55.167283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.386 [2024-12-11 15:07:55.167291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.386 [2024-12-11 15:07:55.167298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.386 [2024-12-11 15:07:55.167304] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.386 [2024-12-11 15:07:55.179436] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.386 [2024-12-11 15:07:55.179888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.386 [2024-12-11 15:07:55.179935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.386 [2024-12-11 15:07:55.179959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.386 [2024-12-11 15:07:55.180561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.386 [2024-12-11 15:07:55.180941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.386 [2024-12-11 15:07:55.180949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.386 [2024-12-11 15:07:55.180955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.386 [2024-12-11 15:07:55.180974] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.386 [2024-12-11 15:07:55.194419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.386 [2024-12-11 15:07:55.194952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.386 [2024-12-11 15:07:55.194997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.386 [2024-12-11 15:07:55.195021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.386 [2024-12-11 15:07:55.195590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.386 [2024-12-11 15:07:55.195845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.386 [2024-12-11 15:07:55.195857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.387 [2024-12-11 15:07:55.195866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.387 [2024-12-11 15:07:55.195875] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.387 [2024-12-11 15:07:55.207406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.387 [2024-12-11 15:07:55.207842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.387 [2024-12-11 15:07:55.207888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.387 [2024-12-11 15:07:55.207912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.387 [2024-12-11 15:07:55.208512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.387 [2024-12-11 15:07:55.209007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.387 [2024-12-11 15:07:55.209015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.387 [2024-12-11 15:07:55.209021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.387 [2024-12-11 15:07:55.209027] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.387 [2024-12-11 15:07:55.220305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.387 [2024-12-11 15:07:55.220726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.387 [2024-12-11 15:07:55.220742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.387 [2024-12-11 15:07:55.220749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.387 [2024-12-11 15:07:55.220911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.387 [2024-12-11 15:07:55.221074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.387 [2024-12-11 15:07:55.221082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.387 [2024-12-11 15:07:55.221088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.387 [2024-12-11 15:07:55.221094] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.387 [2024-12-11 15:07:55.233217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.387 [2024-12-11 15:07:55.233626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.387 [2024-12-11 15:07:55.233642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.387 [2024-12-11 15:07:55.233649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.387 [2024-12-11 15:07:55.233812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.387 [2024-12-11 15:07:55.233975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.387 [2024-12-11 15:07:55.233982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.387 [2024-12-11 15:07:55.233988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.387 [2024-12-11 15:07:55.233994] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.387 [2024-12-11 15:07:55.246103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.387 [2024-12-11 15:07:55.246548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.387 [2024-12-11 15:07:55.246564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.387 [2024-12-11 15:07:55.246575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.387 [2024-12-11 15:07:55.246747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.387 [2024-12-11 15:07:55.246925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.387 [2024-12-11 15:07:55.246933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.387 [2024-12-11 15:07:55.246939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.387 [2024-12-11 15:07:55.246945] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.387 [2024-12-11 15:07:55.259237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.387 [2024-12-11 15:07:55.259659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.387 [2024-12-11 15:07:55.259675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.387 [2024-12-11 15:07:55.259682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.387 [2024-12-11 15:07:55.259883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.387 [2024-12-11 15:07:55.260061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.387 [2024-12-11 15:07:55.260069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.387 [2024-12-11 15:07:55.260076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.387 [2024-12-11 15:07:55.260084] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.387 [2024-12-11 15:07:55.272471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.387 [2024-12-11 15:07:55.272912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.387 [2024-12-11 15:07:55.272929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.387 [2024-12-11 15:07:55.272937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.387 [2024-12-11 15:07:55.273139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.387 [2024-12-11 15:07:55.273325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.387 [2024-12-11 15:07:55.273334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.387 [2024-12-11 15:07:55.273340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.387 [2024-12-11 15:07:55.273347] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.387 [2024-12-11 15:07:55.285440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.387 [2024-12-11 15:07:55.285877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.387 [2024-12-11 15:07:55.285923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.387 [2024-12-11 15:07:55.285946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.387 [2024-12-11 15:07:55.286544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.387 [2024-12-11 15:07:55.286996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.387 [2024-12-11 15:07:55.287004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.387 [2024-12-11 15:07:55.287010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.387 [2024-12-11 15:07:55.287016] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.387 [2024-12-11 15:07:55.298535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.387 [2024-12-11 15:07:55.298940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.387 [2024-12-11 15:07:55.298957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.387 [2024-12-11 15:07:55.298964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.387 [2024-12-11 15:07:55.299137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.387 [2024-12-11 15:07:55.299316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.387 [2024-12-11 15:07:55.299325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.387 [2024-12-11 15:07:55.299331] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.387 [2024-12-11 15:07:55.299337] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.387 [2024-12-11 15:07:55.311377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.387 [2024-12-11 15:07:55.311780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.387 [2024-12-11 15:07:55.311797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.387 [2024-12-11 15:07:55.311804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.387 [2024-12-11 15:07:55.311967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.387 [2024-12-11 15:07:55.312129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.387 [2024-12-11 15:07:55.312137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.387 [2024-12-11 15:07:55.312143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.387 [2024-12-11 15:07:55.312149] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.387 [2024-12-11 15:07:55.324283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.387 [2024-12-11 15:07:55.324679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.387 [2024-12-11 15:07:55.324695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.387 [2024-12-11 15:07:55.324702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.387 [2024-12-11 15:07:55.324866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.387 [2024-12-11 15:07:55.325029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.387 [2024-12-11 15:07:55.325036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.387 [2024-12-11 15:07:55.325046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.388 [2024-12-11 15:07:55.325052] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.388 [2024-12-11 15:07:55.337165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.388 [2024-12-11 15:07:55.337580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.388 [2024-12-11 15:07:55.337625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.388 [2024-12-11 15:07:55.337647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.388 [2024-12-11 15:07:55.338115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.388 [2024-12-11 15:07:55.338286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.388 [2024-12-11 15:07:55.338294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.388 [2024-12-11 15:07:55.338300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.388 [2024-12-11 15:07:55.338306] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.388 [2024-12-11 15:07:55.350113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.388 [2024-12-11 15:07:55.350485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.388 [2024-12-11 15:07:55.350502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.388 [2024-12-11 15:07:55.350509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.388 [2024-12-11 15:07:55.350681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.388 [2024-12-11 15:07:55.350853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.388 [2024-12-11 15:07:55.350861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.388 [2024-12-11 15:07:55.350867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.388 [2024-12-11 15:07:55.350873] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.388 [2024-12-11 15:07:55.363050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.388 [2024-12-11 15:07:55.363448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.388 [2024-12-11 15:07:55.363464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.388 [2024-12-11 15:07:55.363471] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.388 [2024-12-11 15:07:55.363635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.388 [2024-12-11 15:07:55.363797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.388 [2024-12-11 15:07:55.363805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.388 [2024-12-11 15:07:55.363811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.388 [2024-12-11 15:07:55.363817] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.388 [2024-12-11 15:07:55.376039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.388 [2024-12-11 15:07:55.376456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.388 [2024-12-11 15:07:55.376472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.388 [2024-12-11 15:07:55.376479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.388 [2024-12-11 15:07:55.377050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.388 [2024-12-11 15:07:55.377219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.388 [2024-12-11 15:07:55.377227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.388 [2024-12-11 15:07:55.377233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.388 [2024-12-11 15:07:55.377238] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.388 [2024-12-11 15:07:55.388883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.388 [2024-12-11 15:07:55.389283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.388 [2024-12-11 15:07:55.389329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.388 [2024-12-11 15:07:55.389352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.388 [2024-12-11 15:07:55.389935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.388 [2024-12-11 15:07:55.390413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.388 [2024-12-11 15:07:55.390421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.388 [2024-12-11 15:07:55.390427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.388 [2024-12-11 15:07:55.390433] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.388 [2024-12-11 15:07:55.401782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.388 [2024-12-11 15:07:55.402178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.388 [2024-12-11 15:07:55.402194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.388 [2024-12-11 15:07:55.402201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.388 [2024-12-11 15:07:55.402365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.388 [2024-12-11 15:07:55.402528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.388 [2024-12-11 15:07:55.402535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.388 [2024-12-11 15:07:55.402541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.388 [2024-12-11 15:07:55.402547] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.388 [2024-12-11 15:07:55.414602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.388 [2024-12-11 15:07:55.414951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.388 [2024-12-11 15:07:55.414967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.388 [2024-12-11 15:07:55.414977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.388 [2024-12-11 15:07:55.415139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.388 [2024-12-11 15:07:55.415309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.388 [2024-12-11 15:07:55.415318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.388 [2024-12-11 15:07:55.415324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.388 [2024-12-11 15:07:55.415330] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.388 [2024-12-11 15:07:55.427527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.388 [2024-12-11 15:07:55.427951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.388 [2024-12-11 15:07:55.427996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.388 [2024-12-11 15:07:55.428022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.388 [2024-12-11 15:07:55.428450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.388 [2024-12-11 15:07:55.428624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.388 [2024-12-11 15:07:55.428632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.388 [2024-12-11 15:07:55.428638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.388 [2024-12-11 15:07:55.428644] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.648 [2024-12-11 15:07:55.440532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.648 [2024-12-11 15:07:55.440961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.648 [2024-12-11 15:07:55.440979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.648 [2024-12-11 15:07:55.440987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.648 [2024-12-11 15:07:55.441168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.648 [2024-12-11 15:07:55.441342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.648 [2024-12-11 15:07:55.441350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.648 [2024-12-11 15:07:55.441357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.648 [2024-12-11 15:07:55.441363] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.648 [2024-12-11 15:07:55.453485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.648 [2024-12-11 15:07:55.453882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.648 [2024-12-11 15:07:55.453899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.648 [2024-12-11 15:07:55.453906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.648 [2024-12-11 15:07:55.454069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.648 [2024-12-11 15:07:55.454241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.648 [2024-12-11 15:07:55.454250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.648 [2024-12-11 15:07:55.454255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.648 [2024-12-11 15:07:55.454261] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.648 9270.33 IOPS, 36.21 MiB/s [2024-12-11T14:07:55.696Z] [2024-12-11 15:07:55.467551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.648 [2024-12-11 15:07:55.467944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.648 [2024-12-11 15:07:55.467961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.648 [2024-12-11 15:07:55.467968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.648 [2024-12-11 15:07:55.468131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.648 [2024-12-11 15:07:55.468302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.648 [2024-12-11 15:07:55.468310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.648 [2024-12-11 15:07:55.468316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.648 [2024-12-11 15:07:55.468322] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.648 [2024-12-11 15:07:55.480366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.648 [2024-12-11 15:07:55.480777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.648 [2024-12-11 15:07:55.480793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.648 [2024-12-11 15:07:55.480800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.648 [2024-12-11 15:07:55.480963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.648 [2024-12-11 15:07:55.481125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.648 [2024-12-11 15:07:55.481133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.648 [2024-12-11 15:07:55.481139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.648 [2024-12-11 15:07:55.481145] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.648 [2024-12-11 15:07:55.493252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.648 [2024-12-11 15:07:55.493656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.648 [2024-12-11 15:07:55.493672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.648 [2024-12-11 15:07:55.493679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.648 [2024-12-11 15:07:55.493842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.648 [2024-12-11 15:07:55.494005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.648 [2024-12-11 15:07:55.494013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.648 [2024-12-11 15:07:55.494022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.648 [2024-12-11 15:07:55.494028] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.648 [2024-12-11 15:07:55.506150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.648 [2024-12-11 15:07:55.506548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.648 [2024-12-11 15:07:55.506565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.648 [2024-12-11 15:07:55.506572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.648 [2024-12-11 15:07:55.506744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.648 [2024-12-11 15:07:55.506916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.648 [2024-12-11 15:07:55.506924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.648 [2024-12-11 15:07:55.506930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.648 [2024-12-11 15:07:55.506936] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.648 [2024-12-11 15:07:55.519350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.648 [2024-12-11 15:07:55.519756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.648 [2024-12-11 15:07:55.519773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.648 [2024-12-11 15:07:55.519780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.648 [2024-12-11 15:07:55.519952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.648 [2024-12-11 15:07:55.520125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.649 [2024-12-11 15:07:55.520133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.649 [2024-12-11 15:07:55.520139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.649 [2024-12-11 15:07:55.520145] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.649 [2024-12-11 15:07:55.532332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.649 [2024-12-11 15:07:55.532735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.649 [2024-12-11 15:07:55.532751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.649 [2024-12-11 15:07:55.532758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.649 [2024-12-11 15:07:55.532930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.649 [2024-12-11 15:07:55.533104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.649 [2024-12-11 15:07:55.533112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.649 [2024-12-11 15:07:55.533118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.649 [2024-12-11 15:07:55.533125] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.649 [2024-12-11 15:07:55.545227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.649 [2024-12-11 15:07:55.545556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.649 [2024-12-11 15:07:55.545572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.649 [2024-12-11 15:07:55.545579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.649 [2024-12-11 15:07:55.545741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.649 [2024-12-11 15:07:55.545904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.649 [2024-12-11 15:07:55.545912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.649 [2024-12-11 15:07:55.545917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.649 [2024-12-11 15:07:55.545923] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.649 [2024-12-11 15:07:55.558074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.649 [2024-12-11 15:07:55.558470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.649 [2024-12-11 15:07:55.558486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.649 [2024-12-11 15:07:55.558493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.649 [2024-12-11 15:07:55.558656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.649 [2024-12-11 15:07:55.558819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.649 [2024-12-11 15:07:55.558827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.649 [2024-12-11 15:07:55.558833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.649 [2024-12-11 15:07:55.558839] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.649 [2024-12-11 15:07:55.570957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.649 [2024-12-11 15:07:55.571352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.649 [2024-12-11 15:07:55.571369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.649 [2024-12-11 15:07:55.571376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.649 [2024-12-11 15:07:55.571539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.649 [2024-12-11 15:07:55.571702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.649 [2024-12-11 15:07:55.571709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.649 [2024-12-11 15:07:55.571715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.649 [2024-12-11 15:07:55.571721] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.649 [2024-12-11 15:07:55.583778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.649 [2024-12-11 15:07:55.584185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.649 [2024-12-11 15:07:55.584229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.649 [2024-12-11 15:07:55.584261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.649 [2024-12-11 15:07:55.584773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.649 [2024-12-11 15:07:55.584936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.649 [2024-12-11 15:07:55.584944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.649 [2024-12-11 15:07:55.584949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.649 [2024-12-11 15:07:55.584955] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.649 [2024-12-11 15:07:55.596601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.649 [2024-12-11 15:07:55.597002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.649 [2024-12-11 15:07:55.597020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.649 [2024-12-11 15:07:55.597027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.649 [2024-12-11 15:07:55.597196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.649 [2024-12-11 15:07:55.597359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.649 [2024-12-11 15:07:55.597367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.649 [2024-12-11 15:07:55.597373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.649 [2024-12-11 15:07:55.597379] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.649 [2024-12-11 15:07:55.609524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.649 [2024-12-11 15:07:55.609921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.649 [2024-12-11 15:07:55.609937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.649 [2024-12-11 15:07:55.609944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.649 [2024-12-11 15:07:55.610107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.649 [2024-12-11 15:07:55.610277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.649 [2024-12-11 15:07:55.610285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.649 [2024-12-11 15:07:55.610291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.649 [2024-12-11 15:07:55.610297] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.649 [2024-12-11 15:07:55.622591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.649 [2024-12-11 15:07:55.622995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.649 [2024-12-11 15:07:55.623039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.649 [2024-12-11 15:07:55.623062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.649 [2024-12-11 15:07:55.623604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.649 [2024-12-11 15:07:55.623771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.649 [2024-12-11 15:07:55.623779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.649 [2024-12-11 15:07:55.623785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.649 [2024-12-11 15:07:55.623791] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.649 [2024-12-11 15:07:55.635473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.649 [2024-12-11 15:07:55.635882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.649 [2024-12-11 15:07:55.635899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.649 [2024-12-11 15:07:55.635906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.649 [2024-12-11 15:07:55.636069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.649 [2024-12-11 15:07:55.636242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.649 [2024-12-11 15:07:55.636251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.649 [2024-12-11 15:07:55.636257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.649 [2024-12-11 15:07:55.636263] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.649 [2024-12-11 15:07:55.648310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.649 [2024-12-11 15:07:55.648667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.649 [2024-12-11 15:07:55.648684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.649 [2024-12-11 15:07:55.648690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.649 [2024-12-11 15:07:55.648854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.649 [2024-12-11 15:07:55.649017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.649 [2024-12-11 15:07:55.649025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.650 [2024-12-11 15:07:55.649031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.650 [2024-12-11 15:07:55.649037] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.650 [2024-12-11 15:07:55.661246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.650 [2024-12-11 15:07:55.661697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.650 [2024-12-11 15:07:55.661741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.650 [2024-12-11 15:07:55.661764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.650 [2024-12-11 15:07:55.662165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.650 [2024-12-11 15:07:55.662329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.650 [2024-12-11 15:07:55.662336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.650 [2024-12-11 15:07:55.662346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.650 [2024-12-11 15:07:55.662352] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.650 [2024-12-11 15:07:55.674161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.650 [2024-12-11 15:07:55.674518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.650 [2024-12-11 15:07:55.674562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.650 [2024-12-11 15:07:55.674584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.650 [2024-12-11 15:07:55.675073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.650 [2024-12-11 15:07:55.675253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.650 [2024-12-11 15:07:55.675262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.650 [2024-12-11 15:07:55.675269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.650 [2024-12-11 15:07:55.675275] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.650 [2024-12-11 15:07:55.686981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.650 [2024-12-11 15:07:55.687379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.650 [2024-12-11 15:07:55.687394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.650 [2024-12-11 15:07:55.687401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.650 [2024-12-11 15:07:55.687564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.650 [2024-12-11 15:07:55.687727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.650 [2024-12-11 15:07:55.687735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.650 [2024-12-11 15:07:55.687741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.650 [2024-12-11 15:07:55.687746] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.909 [2024-12-11 15:07:55.700094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.909 [2024-12-11 15:07:55.700513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.909 [2024-12-11 15:07:55.700531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.909 [2024-12-11 15:07:55.700539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.909 [2024-12-11 15:07:55.700702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.909 [2024-12-11 15:07:55.700869] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.909 [2024-12-11 15:07:55.700877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.909 [2024-12-11 15:07:55.700883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.909 [2024-12-11 15:07:55.700889] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.909 [2024-12-11 15:07:55.713055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.909 [2024-12-11 15:07:55.713468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.909 [2024-12-11 15:07:55.713516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.909 [2024-12-11 15:07:55.713540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.909 [2024-12-11 15:07:55.714039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.909 [2024-12-11 15:07:55.714209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.909 [2024-12-11 15:07:55.714217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.909 [2024-12-11 15:07:55.714224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.909 [2024-12-11 15:07:55.714230] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.909 [2024-12-11 15:07:55.725893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.909 [2024-12-11 15:07:55.726288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.909 [2024-12-11 15:07:55.726305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.909 [2024-12-11 15:07:55.726312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.909 [2024-12-11 15:07:55.726476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.909 [2024-12-11 15:07:55.726639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.909 [2024-12-11 15:07:55.726646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.909 [2024-12-11 15:07:55.726652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.909 [2024-12-11 15:07:55.726658] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.909 [2024-12-11 15:07:55.738763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.909 [2024-12-11 15:07:55.739152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.909 [2024-12-11 15:07:55.739173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.909 [2024-12-11 15:07:55.739180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.909 [2024-12-11 15:07:55.739343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.909 [2024-12-11 15:07:55.739506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.909 [2024-12-11 15:07:55.739513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.909 [2024-12-11 15:07:55.739520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.909 [2024-12-11 15:07:55.739525] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.909 [2024-12-11 15:07:55.751625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.909 [2024-12-11 15:07:55.752040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.909 [2024-12-11 15:07:55.752068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.909 [2024-12-11 15:07:55.752078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.909 [2024-12-11 15:07:55.752247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.909 [2024-12-11 15:07:55.752416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.909 [2024-12-11 15:07:55.752424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.909 [2024-12-11 15:07:55.752430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.909 [2024-12-11 15:07:55.752436] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.910 [2024-12-11 15:07:55.764536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.910 [2024-12-11 15:07:55.764955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.910 [2024-12-11 15:07:55.764972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.910 [2024-12-11 15:07:55.764979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.910 [2024-12-11 15:07:55.765151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.910 [2024-12-11 15:07:55.765331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.910 [2024-12-11 15:07:55.765340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.910 [2024-12-11 15:07:55.765346] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.910 [2024-12-11 15:07:55.765352] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.910 [2024-12-11 15:07:55.777662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.910 [2024-12-11 15:07:55.778078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.910 [2024-12-11 15:07:55.778094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.910 [2024-12-11 15:07:55.778102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.910 [2024-12-11 15:07:55.778285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.910 [2024-12-11 15:07:55.778463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.910 [2024-12-11 15:07:55.778471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.910 [2024-12-11 15:07:55.778478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.910 [2024-12-11 15:07:55.778484] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.910 [2024-12-11 15:07:55.790678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.910 [2024-12-11 15:07:55.791112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.910 [2024-12-11 15:07:55.791128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.910 [2024-12-11 15:07:55.791135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.910 [2024-12-11 15:07:55.791316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.910 [2024-12-11 15:07:55.791494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.910 [2024-12-11 15:07:55.791502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.910 [2024-12-11 15:07:55.791508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.910 [2024-12-11 15:07:55.791514] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.910 [2024-12-11 15:07:55.803606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.910 [2024-12-11 15:07:55.804016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.910 [2024-12-11 15:07:55.804033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.910 [2024-12-11 15:07:55.804040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.910 [2024-12-11 15:07:55.804221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.910 [2024-12-11 15:07:55.804393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.910 [2024-12-11 15:07:55.804401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.910 [2024-12-11 15:07:55.804408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.910 [2024-12-11 15:07:55.804414] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.910 [2024-12-11 15:07:55.816417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.910 [2024-12-11 15:07:55.816797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.910 [2024-12-11 15:07:55.816814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.910 [2024-12-11 15:07:55.816821] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.910 [2024-12-11 15:07:55.816983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.910 [2024-12-11 15:07:55.817146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.910 [2024-12-11 15:07:55.817153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.910 [2024-12-11 15:07:55.817166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.910 [2024-12-11 15:07:55.817172] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.910 [2024-12-11 15:07:55.829314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.910 [2024-12-11 15:07:55.829688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.910 [2024-12-11 15:07:55.829721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.910 [2024-12-11 15:07:55.829747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.910 [2024-12-11 15:07:55.830317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.910 [2024-12-11 15:07:55.830481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.910 [2024-12-11 15:07:55.830488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.910 [2024-12-11 15:07:55.830498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.910 [2024-12-11 15:07:55.830504] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.910 [2024-12-11 15:07:55.842414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.910 [2024-12-11 15:07:55.842818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.910 [2024-12-11 15:07:55.842834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.910 [2024-12-11 15:07:55.842841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.910 [2024-12-11 15:07:55.843014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.910 [2024-12-11 15:07:55.843194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.910 [2024-12-11 15:07:55.843202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.910 [2024-12-11 15:07:55.843209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.910 [2024-12-11 15:07:55.843215] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.910 [2024-12-11 15:07:55.855216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.910 [2024-12-11 15:07:55.855610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.910 [2024-12-11 15:07:55.855626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.910 [2024-12-11 15:07:55.855633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.910 [2024-12-11 15:07:55.855795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.910 [2024-12-11 15:07:55.855958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.910 [2024-12-11 15:07:55.855965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.910 [2024-12-11 15:07:55.855971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.910 [2024-12-11 15:07:55.855976] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.910 [2024-12-11 15:07:55.868079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.910 [2024-12-11 15:07:55.868459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.910 [2024-12-11 15:07:55.868475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.910 [2024-12-11 15:07:55.868481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.910 [2024-12-11 15:07:55.868644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.910 [2024-12-11 15:07:55.868806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.910 [2024-12-11 15:07:55.868813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.910 [2024-12-11 15:07:55.868819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.910 [2024-12-11 15:07:55.868825] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.910 [2024-12-11 15:07:55.880880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.910 [2024-12-11 15:07:55.881253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.910 [2024-12-11 15:07:55.881269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.910 [2024-12-11 15:07:55.881276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.910 [2024-12-11 15:07:55.881439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.910 [2024-12-11 15:07:55.881602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.910 [2024-12-11 15:07:55.881610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.910 [2024-12-11 15:07:55.881615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.910 [2024-12-11 15:07:55.881622] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.910 [2024-12-11 15:07:55.893730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.910 [2024-12-11 15:07:55.894122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.911 [2024-12-11 15:07:55.894138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.911 [2024-12-11 15:07:55.894145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.911 [2024-12-11 15:07:55.894316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.911 [2024-12-11 15:07:55.894479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.911 [2024-12-11 15:07:55.894486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.911 [2024-12-11 15:07:55.894492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.911 [2024-12-11 15:07:55.894498] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.911 [2024-12-11 15:07:55.906694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.911 [2024-12-11 15:07:55.907088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.911 [2024-12-11 15:07:55.907104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.911 [2024-12-11 15:07:55.907111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.911 [2024-12-11 15:07:55.907280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.911 [2024-12-11 15:07:55.907445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.911 [2024-12-11 15:07:55.907452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.911 [2024-12-11 15:07:55.907458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.911 [2024-12-11 15:07:55.907464] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.911 [2024-12-11 15:07:55.919574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.911 [2024-12-11 15:07:55.919893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.911 [2024-12-11 15:07:55.919909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.911 [2024-12-11 15:07:55.919919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.911 [2024-12-11 15:07:55.920082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.911 [2024-12-11 15:07:55.920252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.911 [2024-12-11 15:07:55.920260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.911 [2024-12-11 15:07:55.920266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.911 [2024-12-11 15:07:55.920272] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:02.911 [2024-12-11 15:07:55.932395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.911 [2024-12-11 15:07:55.932780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.911 [2024-12-11 15:07:55.932824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.911 [2024-12-11 15:07:55.932848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.911 [2024-12-11 15:07:55.933315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.911 [2024-12-11 15:07:55.933480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.911 [2024-12-11 15:07:55.933487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.911 [2024-12-11 15:07:55.933493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.911 [2024-12-11 15:07:55.933499] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:02.911 [2024-12-11 15:07:55.945299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:02.911 [2024-12-11 15:07:55.945621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.911 [2024-12-11 15:07:55.945637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:02.911 [2024-12-11 15:07:55.945644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:02.911 [2024-12-11 15:07:55.945807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:02.911 [2024-12-11 15:07:55.945971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:02.911 [2024-12-11 15:07:55.945978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:02.911 [2024-12-11 15:07:55.945984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:02.911 [2024-12-11 15:07:55.945990] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.170 [2024-12-11 15:07:55.958263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.170 [2024-12-11 15:07:55.958694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.170 [2024-12-11 15:07:55.958714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.170 [2024-12-11 15:07:55.958723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.170 [2024-12-11 15:07:55.958902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.170 [2024-12-11 15:07:55.959087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.170 [2024-12-11 15:07:55.959096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.170 [2024-12-11 15:07:55.959104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.170 [2024-12-11 15:07:55.959110] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.170 [2024-12-11 15:07:55.971070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.170 [2024-12-11 15:07:55.971478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.170 [2024-12-11 15:07:55.971495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.170 [2024-12-11 15:07:55.971502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.170 [2024-12-11 15:07:55.971665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.170 [2024-12-11 15:07:55.971829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.170 [2024-12-11 15:07:55.971837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.170 [2024-12-11 15:07:55.971844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.170 [2024-12-11 15:07:55.971852] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.170 [2024-12-11 15:07:55.983925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.170 [2024-12-11 15:07:55.984271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.170 [2024-12-11 15:07:55.984288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.170 [2024-12-11 15:07:55.984295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.170 [2024-12-11 15:07:55.984458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.170 [2024-12-11 15:07:55.984622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.170 [2024-12-11 15:07:55.984630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.170 [2024-12-11 15:07:55.984636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.170 [2024-12-11 15:07:55.984643] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.170 [2024-12-11 15:07:55.996868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.170 [2024-12-11 15:07:55.997239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.170 [2024-12-11 15:07:55.997255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.170 [2024-12-11 15:07:55.997262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.170 [2024-12-11 15:07:55.997424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.170 [2024-12-11 15:07:55.997587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.170 [2024-12-11 15:07:55.997595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.170 [2024-12-11 15:07:55.997607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.170 [2024-12-11 15:07:55.997613] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.170 [2024-12-11 15:07:56.009925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.170 [2024-12-11 15:07:56.010294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.170 [2024-12-11 15:07:56.010313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.170 [2024-12-11 15:07:56.010321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.170 [2024-12-11 15:07:56.010484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.170 [2024-12-11 15:07:56.010650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.170 [2024-12-11 15:07:56.010661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.170 [2024-12-11 15:07:56.010668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.170 [2024-12-11 15:07:56.010675] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.170 [2024-12-11 15:07:56.022778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.171 [2024-12-11 15:07:56.023231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.171 [2024-12-11 15:07:56.023249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.171 [2024-12-11 15:07:56.023256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.171 [2024-12-11 15:07:56.023429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.171 [2024-12-11 15:07:56.023603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.171 [2024-12-11 15:07:56.023611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.171 [2024-12-11 15:07:56.023618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.171 [2024-12-11 15:07:56.023625] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.171 [2024-12-11 15:07:56.035912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.171 [2024-12-11 15:07:56.036280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.171 [2024-12-11 15:07:56.036299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.171 [2024-12-11 15:07:56.036306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.171 [2024-12-11 15:07:56.036484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.171 [2024-12-11 15:07:56.036663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.171 [2024-12-11 15:07:56.036672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.171 [2024-12-11 15:07:56.036679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.171 [2024-12-11 15:07:56.036685] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.171 [2024-12-11 15:07:56.049009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.171 [2024-12-11 15:07:56.049346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.171 [2024-12-11 15:07:56.049363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.171 [2024-12-11 15:07:56.049370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.171 [2024-12-11 15:07:56.049543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.171 [2024-12-11 15:07:56.049716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.171 [2024-12-11 15:07:56.049724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.171 [2024-12-11 15:07:56.049731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.171 [2024-12-11 15:07:56.049737] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.171 [2024-12-11 15:07:56.062107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.171 [2024-12-11 15:07:56.062565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.171 [2024-12-11 15:07:56.062609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.171 [2024-12-11 15:07:56.062631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.171 [2024-12-11 15:07:56.063054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.171 [2024-12-11 15:07:56.063223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.171 [2024-12-11 15:07:56.063231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.171 [2024-12-11 15:07:56.063238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.171 [2024-12-11 15:07:56.063243] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.171 [2024-12-11 15:07:56.074995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.171 [2024-12-11 15:07:56.075387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.171 [2024-12-11 15:07:56.075432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.171 [2024-12-11 15:07:56.075456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.171 [2024-12-11 15:07:56.075938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.171 [2024-12-11 15:07:56.076112] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.171 [2024-12-11 15:07:56.076120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.171 [2024-12-11 15:07:56.076127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.171 [2024-12-11 15:07:56.076134] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.171 [2024-12-11 15:07:56.088016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.171 [2024-12-11 15:07:56.088401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.171 [2024-12-11 15:07:56.088418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.171 [2024-12-11 15:07:56.088428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.171 [2024-12-11 15:07:56.088601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.171 [2024-12-11 15:07:56.088774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.171 [2024-12-11 15:07:56.088782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.171 [2024-12-11 15:07:56.088788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.171 [2024-12-11 15:07:56.088794] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.171 [2024-12-11 15:07:56.101001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.171 [2024-12-11 15:07:56.101360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.171 [2024-12-11 15:07:56.101377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.171 [2024-12-11 15:07:56.101384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.171 [2024-12-11 15:07:56.101560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.171 [2024-12-11 15:07:56.101735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.171 [2024-12-11 15:07:56.101743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.171 [2024-12-11 15:07:56.101749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.171 [2024-12-11 15:07:56.101756] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.171 [2024-12-11 15:07:56.113838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.171 [2024-12-11 15:07:56.114105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.171 [2024-12-11 15:07:56.114121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.171 [2024-12-11 15:07:56.114128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.171 [2024-12-11 15:07:56.114300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.171 [2024-12-11 15:07:56.114464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.171 [2024-12-11 15:07:56.114472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.171 [2024-12-11 15:07:56.114478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.171 [2024-12-11 15:07:56.114484] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.171 [2024-12-11 15:07:56.126656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.171 [2024-12-11 15:07:56.127068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.171 [2024-12-11 15:07:56.127084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.171 [2024-12-11 15:07:56.127118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.171 [2024-12-11 15:07:56.127718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.171 [2024-12-11 15:07:56.128214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.171 [2024-12-11 15:07:56.128223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.171 [2024-12-11 15:07:56.128229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.171 [2024-12-11 15:07:56.128235] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.171 [2024-12-11 15:07:56.139599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.171 [2024-12-11 15:07:56.139881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.171 [2024-12-11 15:07:56.139897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.171 [2024-12-11 15:07:56.139904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.171 [2024-12-11 15:07:56.140067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.171 [2024-12-11 15:07:56.140239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.171 [2024-12-11 15:07:56.140247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.171 [2024-12-11 15:07:56.140254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.171 [2024-12-11 15:07:56.140261] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.172 [2024-12-11 15:07:56.152478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.172 [2024-12-11 15:07:56.152836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.172 [2024-12-11 15:07:56.152852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.172 [2024-12-11 15:07:56.152859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.172 [2024-12-11 15:07:56.153022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.172 [2024-12-11 15:07:56.153193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.172 [2024-12-11 15:07:56.153201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.172 [2024-12-11 15:07:56.153208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.172 [2024-12-11 15:07:56.153214] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.172 [2024-12-11 15:07:56.165355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.172 [2024-12-11 15:07:56.165675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.172 [2024-12-11 15:07:56.165691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.172 [2024-12-11 15:07:56.165698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.172 [2024-12-11 15:07:56.165861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.172 [2024-12-11 15:07:56.166024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.172 [2024-12-11 15:07:56.166031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.172 [2024-12-11 15:07:56.166041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.172 [2024-12-11 15:07:56.166047] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.172 [2024-12-11 15:07:56.178310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.172 [2024-12-11 15:07:56.178657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.172 [2024-12-11 15:07:56.178672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.172 [2024-12-11 15:07:56.178679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.172 [2024-12-11 15:07:56.178843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.172 [2024-12-11 15:07:56.179006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.172 [2024-12-11 15:07:56.179014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.172 [2024-12-11 15:07:56.179020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.172 [2024-12-11 15:07:56.179026] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.172 [2024-12-11 15:07:56.191316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.172 [2024-12-11 15:07:56.191716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.172 [2024-12-11 15:07:56.191761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.172 [2024-12-11 15:07:56.191785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.172 [2024-12-11 15:07:56.192188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.172 [2024-12-11 15:07:56.192370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.172 [2024-12-11 15:07:56.192378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.172 [2024-12-11 15:07:56.192384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.172 [2024-12-11 15:07:56.192390] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.172 [2024-12-11 15:07:56.204243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.172 [2024-12-11 15:07:56.204545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.172 [2024-12-11 15:07:56.204562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.172 [2024-12-11 15:07:56.204569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.172 [2024-12-11 15:07:56.204742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.172 [2024-12-11 15:07:56.204914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.172 [2024-12-11 15:07:56.204922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.172 [2024-12-11 15:07:56.204929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.172 [2024-12-11 15:07:56.204935] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.431 [2024-12-11 15:07:56.217261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.431 [2024-12-11 15:07:56.217607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.431 [2024-12-11 15:07:56.217626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.431 [2024-12-11 15:07:56.217636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.431 [2024-12-11 15:07:56.217801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.431 [2024-12-11 15:07:56.217964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.431 [2024-12-11 15:07:56.217972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.431 [2024-12-11 15:07:56.217978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.431 [2024-12-11 15:07:56.217984] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.431 [2024-12-11 15:07:56.230162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.431 [2024-12-11 15:07:56.230584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.431 [2024-12-11 15:07:56.230603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.431 [2024-12-11 15:07:56.230610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.431 [2024-12-11 15:07:56.230788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.431 [2024-12-11 15:07:56.230967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.431 [2024-12-11 15:07:56.230975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.431 [2024-12-11 15:07:56.230982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.431 [2024-12-11 15:07:56.230989] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.431 [2024-12-11 15:07:56.243069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.431 [2024-12-11 15:07:56.243450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.431 [2024-12-11 15:07:56.243496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.431 [2024-12-11 15:07:56.243519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.431 [2024-12-11 15:07:56.244048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.431 [2024-12-11 15:07:56.244218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.431 [2024-12-11 15:07:56.244226] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.432 [2024-12-11 15:07:56.244232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.432 [2024-12-11 15:07:56.244238] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.432 [2024-12-11 15:07:56.256002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.432 [2024-12-11 15:07:56.256313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.432 [2024-12-11 15:07:56.256329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.432 [2024-12-11 15:07:56.256339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.432 [2024-12-11 15:07:56.256511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.432 [2024-12-11 15:07:56.256683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.432 [2024-12-11 15:07:56.256691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.432 [2024-12-11 15:07:56.256697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.432 [2024-12-11 15:07:56.256704] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.432 [2024-12-11 15:07:56.268804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.432 [2024-12-11 15:07:56.269181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.432 [2024-12-11 15:07:56.269198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.432 [2024-12-11 15:07:56.269204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.432 [2024-12-11 15:07:56.269367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.432 [2024-12-11 15:07:56.269530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.432 [2024-12-11 15:07:56.269537] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.432 [2024-12-11 15:07:56.269543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.432 [2024-12-11 15:07:56.269549] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.432 [2024-12-11 15:07:56.281622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.432 [2024-12-11 15:07:56.281915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.432 [2024-12-11 15:07:56.281932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.432 [2024-12-11 15:07:56.281939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.432 [2024-12-11 15:07:56.282111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.432 [2024-12-11 15:07:56.282292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.432 [2024-12-11 15:07:56.282301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.432 [2024-12-11 15:07:56.282308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.432 [2024-12-11 15:07:56.282314] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.432 [2024-12-11 15:07:56.294774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.432 [2024-12-11 15:07:56.295073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.432 [2024-12-11 15:07:56.295089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.432 [2024-12-11 15:07:56.295097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.432 [2024-12-11 15:07:56.295280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.432 [2024-12-11 15:07:56.295472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.432 [2024-12-11 15:07:56.295480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.432 [2024-12-11 15:07:56.295487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.432 [2024-12-11 15:07:56.295493] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.432 [2024-12-11 15:07:56.307860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.432 [2024-12-11 15:07:56.308213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.432 [2024-12-11 15:07:56.308230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.432 [2024-12-11 15:07:56.308238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.432 [2024-12-11 15:07:56.308410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.432 [2024-12-11 15:07:56.308583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.432 [2024-12-11 15:07:56.308591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.432 [2024-12-11 15:07:56.308597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.432 [2024-12-11 15:07:56.308604] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.432 [2024-12-11 15:07:56.320807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.432 [2024-12-11 15:07:56.321214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.432 [2024-12-11 15:07:56.321231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.432 [2024-12-11 15:07:56.321238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.432 [2024-12-11 15:07:56.321402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.432 [2024-12-11 15:07:56.321566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.432 [2024-12-11 15:07:56.321574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.432 [2024-12-11 15:07:56.321580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.432 [2024-12-11 15:07:56.321586] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.432 [2024-12-11 15:07:56.333673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.432 [2024-12-11 15:07:56.334082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.432 [2024-12-11 15:07:56.334127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.432 [2024-12-11 15:07:56.334150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.432 [2024-12-11 15:07:56.334684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.432 [2024-12-11 15:07:56.334848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.432 [2024-12-11 15:07:56.334856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.432 [2024-12-11 15:07:56.334865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.432 [2024-12-11 15:07:56.334871] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.432 [2024-12-11 15:07:56.346619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.432 [2024-12-11 15:07:56.346965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.432 [2024-12-11 15:07:56.346981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.432 [2024-12-11 15:07:56.346988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.432 [2024-12-11 15:07:56.347150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.432 [2024-12-11 15:07:56.347321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.432 [2024-12-11 15:07:56.347329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.432 [2024-12-11 15:07:56.347335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.432 [2024-12-11 15:07:56.347341] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.432 [2024-12-11 15:07:56.359571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.432 [2024-12-11 15:07:56.359992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.432 [2024-12-11 15:07:56.360009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.432 [2024-12-11 15:07:56.360016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.432 [2024-12-11 15:07:56.360194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.432 [2024-12-11 15:07:56.360366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.432 [2024-12-11 15:07:56.360374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.432 [2024-12-11 15:07:56.360381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.432 [2024-12-11 15:07:56.360387] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.432 [2024-12-11 15:07:56.372549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.432 [2024-12-11 15:07:56.372916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.432 [2024-12-11 15:07:56.372932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.432 [2024-12-11 15:07:56.372939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.432 [2024-12-11 15:07:56.373102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.432 [2024-12-11 15:07:56.373274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.432 [2024-12-11 15:07:56.373283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.432 [2024-12-11 15:07:56.373289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.432 [2024-12-11 15:07:56.373295] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.433 [2024-12-11 15:07:56.385623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.433 [2024-12-11 15:07:56.386013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.433 [2024-12-11 15:07:56.386030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.433 [2024-12-11 15:07:56.386038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.433 [2024-12-11 15:07:56.386221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.433 [2024-12-11 15:07:56.386398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.433 [2024-12-11 15:07:56.386405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.433 [2024-12-11 15:07:56.386412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.433 [2024-12-11 15:07:56.386418] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.433 [2024-12-11 15:07:56.398717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.433 [2024-12-11 15:07:56.399126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.433 [2024-12-11 15:07:56.399142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.433 [2024-12-11 15:07:56.399149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.433 [2024-12-11 15:07:56.399328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.433 [2024-12-11 15:07:56.399502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.433 [2024-12-11 15:07:56.399509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.433 [2024-12-11 15:07:56.399516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.433 [2024-12-11 15:07:56.399522] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.433 [2024-12-11 15:07:56.411728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.433 [2024-12-11 15:07:56.412162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.433 [2024-12-11 15:07:56.412180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.433 [2024-12-11 15:07:56.412187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.433 [2024-12-11 15:07:56.412360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.433 [2024-12-11 15:07:56.412532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.433 [2024-12-11 15:07:56.412540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.433 [2024-12-11 15:07:56.412547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.433 [2024-12-11 15:07:56.412553] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.433 [2024-12-11 15:07:56.424624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.433 [2024-12-11 15:07:56.425027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.433 [2024-12-11 15:07:56.425071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.433 [2024-12-11 15:07:56.425101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.433 [2024-12-11 15:07:56.425554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.433 [2024-12-11 15:07:56.425718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.433 [2024-12-11 15:07:56.425726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.433 [2024-12-11 15:07:56.425732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.433 [2024-12-11 15:07:56.425738] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.433 [2024-12-11 15:07:56.437543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.433 [2024-12-11 15:07:56.437937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.433 [2024-12-11 15:07:56.437953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.433 [2024-12-11 15:07:56.437960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.433 [2024-12-11 15:07:56.438123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.433 [2024-12-11 15:07:56.438294] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.433 [2024-12-11 15:07:56.438303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.433 [2024-12-11 15:07:56.438308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.433 [2024-12-11 15:07:56.438314] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.433 [2024-12-11 15:07:56.450430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.433 [2024-12-11 15:07:56.450772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.433 [2024-12-11 15:07:56.450817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.433 [2024-12-11 15:07:56.450840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.433 [2024-12-11 15:07:56.451439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.433 [2024-12-11 15:07:56.451940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.433 [2024-12-11 15:07:56.451949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.433 [2024-12-11 15:07:56.451955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.433 [2024-12-11 15:07:56.451961] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.433 [2024-12-11 15:07:56.463229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.433 [2024-12-11 15:07:56.463647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.433 [2024-12-11 15:07:56.463663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.433 [2024-12-11 15:07:56.463670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.433 [2024-12-11 15:07:56.463833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.433 [2024-12-11 15:07:56.463998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.433 [2024-12-11 15:07:56.464006] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.433 [2024-12-11 15:07:56.464012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.433 [2024-12-11 15:07:56.464018] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.433 6952.75 IOPS, 27.16 MiB/s [2024-12-11T14:07:56.481Z] [2024-12-11 15:07:56.476297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.693 [2024-12-11 15:07:56.476682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.693 [2024-12-11 15:07:56.476701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.693 [2024-12-11 15:07:56.476710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.693 [2024-12-11 15:07:56.476888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.693 [2024-12-11 15:07:56.477067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.693 [2024-12-11 15:07:56.477075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.693 [2024-12-11 15:07:56.477082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.693 [2024-12-11 15:07:56.477089] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.693 [2024-12-11 15:07:56.489236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.693 [2024-12-11 15:07:56.489665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.693 [2024-12-11 15:07:56.489682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.693 [2024-12-11 15:07:56.489689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.693 [2024-12-11 15:07:56.489853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.693 [2024-12-11 15:07:56.490016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.693 [2024-12-11 15:07:56.490023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.693 [2024-12-11 15:07:56.490029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.693 [2024-12-11 15:07:56.490035] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.693 [2024-12-11 15:07:56.502171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.693 [2024-12-11 15:07:56.502601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.693 [2024-12-11 15:07:56.502617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.693 [2024-12-11 15:07:56.502624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.693 [2024-12-11 15:07:56.502797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.693 [2024-12-11 15:07:56.502980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.693 [2024-12-11 15:07:56.502987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.693 [2024-12-11 15:07:56.502997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.693 [2024-12-11 15:07:56.503004] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.693 [2024-12-11 15:07:56.515038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.693 [2024-12-11 15:07:56.515393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.693 [2024-12-11 15:07:56.515409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.693 [2024-12-11 15:07:56.515416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.693 [2024-12-11 15:07:56.515579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.693 [2024-12-11 15:07:56.515741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.693 [2024-12-11 15:07:56.515749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.693 [2024-12-11 15:07:56.515755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.693 [2024-12-11 15:07:56.515761] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.693 [2024-12-11 15:07:56.527845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.693 [2024-12-11 15:07:56.528243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.693 [2024-12-11 15:07:56.528260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.693 [2024-12-11 15:07:56.528267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.693 [2024-12-11 15:07:56.528430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.693 [2024-12-11 15:07:56.528593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.693 [2024-12-11 15:07:56.528601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.693 [2024-12-11 15:07:56.528607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.693 [2024-12-11 15:07:56.528613] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.693 [2024-12-11 15:07:56.540730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.693 [2024-12-11 15:07:56.541174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.693 [2024-12-11 15:07:56.541191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.693 [2024-12-11 15:07:56.541199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.693 [2024-12-11 15:07:56.541372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.693 [2024-12-11 15:07:56.541544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.693 [2024-12-11 15:07:56.541552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.693 [2024-12-11 15:07:56.541558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.693 [2024-12-11 15:07:56.541565] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.693 [2024-12-11 15:07:56.553925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.693 [2024-12-11 15:07:56.554363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.693 [2024-12-11 15:07:56.554380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.693 [2024-12-11 15:07:56.554388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.693 [2024-12-11 15:07:56.554566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.693 [2024-12-11 15:07:56.554751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.693 [2024-12-11 15:07:56.554759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.693 [2024-12-11 15:07:56.554765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.693 [2024-12-11 15:07:56.554772] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.693 [2024-12-11 15:07:56.566925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.693 [2024-12-11 15:07:56.567359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.693 [2024-12-11 15:07:56.567376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.693 [2024-12-11 15:07:56.567383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.693 [2024-12-11 15:07:56.567555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.693 [2024-12-11 15:07:56.567731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.693 [2024-12-11 15:07:56.567739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.693 [2024-12-11 15:07:56.567745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.693 [2024-12-11 15:07:56.567752] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.693 [2024-12-11 15:07:56.579866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.693 [2024-12-11 15:07:56.580301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.693 [2024-12-11 15:07:56.580346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.693 [2024-12-11 15:07:56.580369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.693 [2024-12-11 15:07:56.580950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.693 [2024-12-11 15:07:56.581146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.693 [2024-12-11 15:07:56.581154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.693 [2024-12-11 15:07:56.581168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.693 [2024-12-11 15:07:56.581174] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.693 [2024-12-11 15:07:56.593007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.693 [2024-12-11 15:07:56.593396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.693 [2024-12-11 15:07:56.593414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.693 [2024-12-11 15:07:56.593425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.693 [2024-12-11 15:07:56.593598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.693 [2024-12-11 15:07:56.593771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.694 [2024-12-11 15:07:56.593779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.694 [2024-12-11 15:07:56.593786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.694 [2024-12-11 15:07:56.593793] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.694 [2024-12-11 15:07:56.605945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.694 [2024-12-11 15:07:56.606401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.694 [2024-12-11 15:07:56.606446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.694 [2024-12-11 15:07:56.606470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.694 [2024-12-11 15:07:56.607050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.694 [2024-12-11 15:07:56.607418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.694 [2024-12-11 15:07:56.607426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.694 [2024-12-11 15:07:56.607432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.694 [2024-12-11 15:07:56.607438] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.694 [2024-12-11 15:07:56.618783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.694 [2024-12-11 15:07:56.619201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.694 [2024-12-11 15:07:56.619217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.694 [2024-12-11 15:07:56.619224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.694 [2024-12-11 15:07:56.619387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.694 [2024-12-11 15:07:56.619549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.694 [2024-12-11 15:07:56.619557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.694 [2024-12-11 15:07:56.619563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.694 [2024-12-11 15:07:56.619569] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.694 [2024-12-11 15:07:56.631742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.694 [2024-12-11 15:07:56.632177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.694 [2024-12-11 15:07:56.632224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.694 [2024-12-11 15:07:56.632247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.694 [2024-12-11 15:07:56.632828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.694 [2024-12-11 15:07:56.633360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.694 [2024-12-11 15:07:56.633369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.694 [2024-12-11 15:07:56.633375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.694 [2024-12-11 15:07:56.633381] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.694 [2024-12-11 15:07:56.644566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.694 [2024-12-11 15:07:56.644999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.694 [2024-12-11 15:07:56.645044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.694 [2024-12-11 15:07:56.645067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.694 [2024-12-11 15:07:56.645663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.694 [2024-12-11 15:07:56.646262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.694 [2024-12-11 15:07:56.646290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.694 [2024-12-11 15:07:56.646296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.694 [2024-12-11 15:07:56.646302] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.694 [2024-12-11 15:07:56.657525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.694 [2024-12-11 15:07:56.657960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.694 [2024-12-11 15:07:56.658004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.694 [2024-12-11 15:07:56.658027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.694 [2024-12-11 15:07:56.658428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.694 [2024-12-11 15:07:56.658592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.694 [2024-12-11 15:07:56.658600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.694 [2024-12-11 15:07:56.658606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.694 [2024-12-11 15:07:56.658611] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.694 [2024-12-11 15:07:56.670507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.694 [2024-12-11 15:07:56.670942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.694 [2024-12-11 15:07:56.670985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.694 [2024-12-11 15:07:56.671008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.694 [2024-12-11 15:07:56.671428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.694 [2024-12-11 15:07:56.671592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.694 [2024-12-11 15:07:56.671600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.694 [2024-12-11 15:07:56.671610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.694 [2024-12-11 15:07:56.671616] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.694 [2024-12-11 15:07:56.683364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.694 [2024-12-11 15:07:56.683710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.694 [2024-12-11 15:07:56.683726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.694 [2024-12-11 15:07:56.683733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.694 [2024-12-11 15:07:56.683895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.694 [2024-12-11 15:07:56.684059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.694 [2024-12-11 15:07:56.684066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.694 [2024-12-11 15:07:56.684073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.694 [2024-12-11 15:07:56.684079] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.694 [2024-12-11 15:07:56.696190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.694 [2024-12-11 15:07:56.696595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.694 [2024-12-11 15:07:56.696639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.694 [2024-12-11 15:07:56.696662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.694 [2024-12-11 15:07:56.697138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.694 [2024-12-11 15:07:56.697307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.694 [2024-12-11 15:07:56.697316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.694 [2024-12-11 15:07:56.697322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.694 [2024-12-11 15:07:56.697328] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.694 [2024-12-11 15:07:56.711187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.694 [2024-12-11 15:07:56.711717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.694 [2024-12-11 15:07:56.711761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.694 [2024-12-11 15:07:56.711785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.694 [2024-12-11 15:07:56.712251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.694 [2024-12-11 15:07:56.712507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.694 [2024-12-11 15:07:56.712518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.694 [2024-12-11 15:07:56.712527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.694 [2024-12-11 15:07:56.712536] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.694 [2024-12-11 15:07:56.724077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.694 [2024-12-11 15:07:56.724513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.694 [2024-12-11 15:07:56.724530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.694 [2024-12-11 15:07:56.724537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.694 [2024-12-11 15:07:56.724704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.694 [2024-12-11 15:07:56.724871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.694 [2024-12-11 15:07:56.724879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.694 [2024-12-11 15:07:56.724885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.695 [2024-12-11 15:07:56.724891] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.695 [2024-12-11 15:07:56.737315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.954 [2024-12-11 15:07:56.737811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.954 [2024-12-11 15:07:56.737832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.954 [2024-12-11 15:07:56.737841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.954 [2024-12-11 15:07:56.738021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.954 [2024-12-11 15:07:56.738207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.954 [2024-12-11 15:07:56.738216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.954 [2024-12-11 15:07:56.738223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.954 [2024-12-11 15:07:56.738230] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.954 [2024-12-11 15:07:56.750329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.954 [2024-12-11 15:07:56.750763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.954 [2024-12-11 15:07:56.750780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.954 [2024-12-11 15:07:56.750787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.954 [2024-12-11 15:07:56.750950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.954 [2024-12-11 15:07:56.751113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.954 [2024-12-11 15:07:56.751120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.954 [2024-12-11 15:07:56.751126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.954 [2024-12-11 15:07:56.751133] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.954 [2024-12-11 15:07:56.763198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.954 [2024-12-11 15:07:56.763635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.954 [2024-12-11 15:07:56.763680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.954 [2024-12-11 15:07:56.763712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.954 [2024-12-11 15:07:56.764312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.954 [2024-12-11 15:07:56.764801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.954 [2024-12-11 15:07:56.764809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.954 [2024-12-11 15:07:56.764815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.954 [2024-12-11 15:07:56.764821] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.954 [2024-12-11 15:07:56.776007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.954 [2024-12-11 15:07:56.776442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.954 [2024-12-11 15:07:56.776488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.954 [2024-12-11 15:07:56.776511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.954 [2024-12-11 15:07:56.777093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.954 [2024-12-11 15:07:56.777692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.954 [2024-12-11 15:07:56.777700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.954 [2024-12-11 15:07:56.777707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.954 [2024-12-11 15:07:56.777712] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.954 [2024-12-11 15:07:56.791121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.954 [2024-12-11 15:07:56.791641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.954 [2024-12-11 15:07:56.791663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.954 [2024-12-11 15:07:56.791673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.954 [2024-12-11 15:07:56.791927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.954 [2024-12-11 15:07:56.792188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.954 [2024-12-11 15:07:56.792200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.954 [2024-12-11 15:07:56.792209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.954 [2024-12-11 15:07:56.792218] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.954 [2024-12-11 15:07:56.804223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.954 [2024-12-11 15:07:56.804578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.954 [2024-12-11 15:07:56.804595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.954 [2024-12-11 15:07:56.804602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.954 [2024-12-11 15:07:56.804794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.954 [2024-12-11 15:07:56.804976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.954 [2024-12-11 15:07:56.804984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.954 [2024-12-11 15:07:56.804991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.954 [2024-12-11 15:07:56.804998] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.954 [2024-12-11 15:07:56.817256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.954 [2024-12-11 15:07:56.817611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.955 [2024-12-11 15:07:56.817627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.955 [2024-12-11 15:07:56.817635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.955 [2024-12-11 15:07:56.817807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.955 [2024-12-11 15:07:56.817983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.955 [2024-12-11 15:07:56.817992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.955 [2024-12-11 15:07:56.817998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.955 [2024-12-11 15:07:56.818005] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.955 [2024-12-11 15:07:56.830125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.955 [2024-12-11 15:07:56.830557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.955 [2024-12-11 15:07:56.830603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.955 [2024-12-11 15:07:56.830626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.955 [2024-12-11 15:07:56.831224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.955 [2024-12-11 15:07:56.831729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.955 [2024-12-11 15:07:56.831736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.955 [2024-12-11 15:07:56.831742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.955 [2024-12-11 15:07:56.831748] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.955 [2024-12-11 15:07:56.843007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.955 [2024-12-11 15:07:56.843431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.955 [2024-12-11 15:07:56.843447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.955 [2024-12-11 15:07:56.843454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.955 [2024-12-11 15:07:56.843616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.955 [2024-12-11 15:07:56.843779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.955 [2024-12-11 15:07:56.843787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.955 [2024-12-11 15:07:56.843796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.955 [2024-12-11 15:07:56.843802] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.955 [2024-12-11 15:07:56.855802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.955 [2024-12-11 15:07:56.856226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.955 [2024-12-11 15:07:56.856243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.955 [2024-12-11 15:07:56.856250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.955 [2024-12-11 15:07:56.856422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.955 [2024-12-11 15:07:56.856594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.955 [2024-12-11 15:07:56.856602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.955 [2024-12-11 15:07:56.856608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.955 [2024-12-11 15:07:56.856614] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.955 [2024-12-11 15:07:56.868675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.955 [2024-12-11 15:07:56.869096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.955 [2024-12-11 15:07:56.869112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.955 [2024-12-11 15:07:56.869119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.955 [2024-12-11 15:07:56.869309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.955 [2024-12-11 15:07:56.869481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.955 [2024-12-11 15:07:56.869489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.955 [2024-12-11 15:07:56.869495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.955 [2024-12-11 15:07:56.869502] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.955 [2024-12-11 15:07:56.881613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.955 [2024-12-11 15:07:56.881982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.955 [2024-12-11 15:07:56.882026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.955 [2024-12-11 15:07:56.882049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.955 [2024-12-11 15:07:56.882646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.955 [2024-12-11 15:07:56.883093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.955 [2024-12-11 15:07:56.883100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.955 [2024-12-11 15:07:56.883106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.955 [2024-12-11 15:07:56.883112] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.955 [2024-12-11 15:07:56.894405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.955 [2024-12-11 15:07:56.894832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.955 [2024-12-11 15:07:56.894876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.955 [2024-12-11 15:07:56.894899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.955 [2024-12-11 15:07:56.895497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.955 [2024-12-11 15:07:56.896037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.955 [2024-12-11 15:07:56.896044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.955 [2024-12-11 15:07:56.896050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.955 [2024-12-11 15:07:56.896056] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.955 [2024-12-11 15:07:56.907518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.955 [2024-12-11 15:07:56.907873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.955 [2024-12-11 15:07:56.907890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.955 [2024-12-11 15:07:56.907897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.955 [2024-12-11 15:07:56.908069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.955 [2024-12-11 15:07:56.908247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.955 [2024-12-11 15:07:56.908255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.955 [2024-12-11 15:07:56.908262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.955 [2024-12-11 15:07:56.908268] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.955 [2024-12-11 15:07:56.920353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.955 [2024-12-11 15:07:56.920756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.955 [2024-12-11 15:07:56.920800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.955 [2024-12-11 15:07:56.920823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.955 [2024-12-11 15:07:56.921419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.955 [2024-12-11 15:07:56.921983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.955 [2024-12-11 15:07:56.921990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.955 [2024-12-11 15:07:56.921996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.955 [2024-12-11 15:07:56.922002] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.955 [2024-12-11 15:07:56.933204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.955 [2024-12-11 15:07:56.933625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.955 [2024-12-11 15:07:56.933641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.955 [2024-12-11 15:07:56.933651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.955 [2024-12-11 15:07:56.933814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.955 [2024-12-11 15:07:56.933976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.955 [2024-12-11 15:07:56.933984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.955 [2024-12-11 15:07:56.933990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.955 [2024-12-11 15:07:56.933996] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.955 [2024-12-11 15:07:56.946112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.955 [2024-12-11 15:07:56.946528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.955 [2024-12-11 15:07:56.946545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.955 [2024-12-11 15:07:56.946551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.956 [2024-12-11 15:07:56.946714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.956 [2024-12-11 15:07:56.946877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.956 [2024-12-11 15:07:56.946885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.956 [2024-12-11 15:07:56.946891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.956 [2024-12-11 15:07:56.946896] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.956 [2024-12-11 15:07:56.958906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.956 [2024-12-11 15:07:56.959326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.956 [2024-12-11 15:07:56.959342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.956 [2024-12-11 15:07:56.959349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.956 [2024-12-11 15:07:56.959512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.956 [2024-12-11 15:07:56.959675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.956 [2024-12-11 15:07:56.959683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.956 [2024-12-11 15:07:56.959689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.956 [2024-12-11 15:07:56.959694] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.956 [2024-12-11 15:07:56.971810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.956 [2024-12-11 15:07:56.972236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.956 [2024-12-11 15:07:56.972280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.956 [2024-12-11 15:07:56.972303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.956 [2024-12-11 15:07:56.972884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.956 [2024-12-11 15:07:56.973339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.956 [2024-12-11 15:07:56.973347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.956 [2024-12-11 15:07:56.973353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.956 [2024-12-11 15:07:56.973359] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:03.956 [2024-12-11 15:07:56.984622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.956 [2024-12-11 15:07:56.985043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.956 [2024-12-11 15:07:56.985059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.956 [2024-12-11 15:07:56.985066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.956 [2024-12-11 15:07:56.985254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.956 [2024-12-11 15:07:56.985426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.956 [2024-12-11 15:07:56.985434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.956 [2024-12-11 15:07:56.985441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.956 [2024-12-11 15:07:56.985447] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:03.956 [2024-12-11 15:07:56.997699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:03.956 [2024-12-11 15:07:56.998138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.956 [2024-12-11 15:07:56.998156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:03.956 [2024-12-11 15:07:56.998170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:03.956 [2024-12-11 15:07:56.998343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:03.956 [2024-12-11 15:07:56.998515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:03.956 [2024-12-11 15:07:56.998523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:03.956 [2024-12-11 15:07:56.998530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:03.956 [2024-12-11 15:07:56.998539] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.216 [2024-12-11 15:07:57.010744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.216 [2024-12-11 15:07:57.011147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.216 [2024-12-11 15:07:57.011169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.216 [2024-12-11 15:07:57.011177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.216 [2024-12-11 15:07:57.011341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.216 [2024-12-11 15:07:57.011505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.216 [2024-12-11 15:07:57.011513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.216 [2024-12-11 15:07:57.011522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.216 [2024-12-11 15:07:57.011529] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.216 [2024-12-11 15:07:57.023659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.216 [2024-12-11 15:07:57.024078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.216 [2024-12-11 15:07:57.024095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.216 [2024-12-11 15:07:57.024102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.216 [2024-12-11 15:07:57.024271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.216 [2024-12-11 15:07:57.024435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.216 [2024-12-11 15:07:57.024443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.216 [2024-12-11 15:07:57.024449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.216 [2024-12-11 15:07:57.024454] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.216 [2024-12-11 15:07:57.036519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.216 [2024-12-11 15:07:57.036917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.216 [2024-12-11 15:07:57.036933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.216 [2024-12-11 15:07:57.036940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.216 [2024-12-11 15:07:57.037103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.216 [2024-12-11 15:07:57.037273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.216 [2024-12-11 15:07:57.037281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.216 [2024-12-11 15:07:57.037287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.216 [2024-12-11 15:07:57.037293] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.216 [2024-12-11 15:07:57.049410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.216 [2024-12-11 15:07:57.049821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.216 [2024-12-11 15:07:57.049838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.216 [2024-12-11 15:07:57.049845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.216 [2024-12-11 15:07:57.050017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.216 [2024-12-11 15:07:57.050198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.216 [2024-12-11 15:07:57.050207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.216 [2024-12-11 15:07:57.050214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.216 [2024-12-11 15:07:57.050221] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.216 [2024-12-11 15:07:57.062560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.216 [2024-12-11 15:07:57.062976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.216 [2024-12-11 15:07:57.062993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.216 [2024-12-11 15:07:57.063000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.216 [2024-12-11 15:07:57.063179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.216 [2024-12-11 15:07:57.063352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.216 [2024-12-11 15:07:57.063360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.216 [2024-12-11 15:07:57.063367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.216 [2024-12-11 15:07:57.063373] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.216 [2024-12-11 15:07:57.075358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.216 [2024-12-11 15:07:57.075801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.216 [2024-12-11 15:07:57.075818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.216 [2024-12-11 15:07:57.075825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.216 [2024-12-11 15:07:57.075997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.216 [2024-12-11 15:07:57.076177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.216 [2024-12-11 15:07:57.076186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.216 [2024-12-11 15:07:57.076192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.216 [2024-12-11 15:07:57.076198] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.216 [2024-12-11 15:07:57.088253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.216 [2024-12-11 15:07:57.088663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.216 [2024-12-11 15:07:57.088680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.216 [2024-12-11 15:07:57.088687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.216 [2024-12-11 15:07:57.088850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.216 [2024-12-11 15:07:57.089012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.216 [2024-12-11 15:07:57.089020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.216 [2024-12-11 15:07:57.089026] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.216 [2024-12-11 15:07:57.089032] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.216 [2024-12-11 15:07:57.101138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.216 [2024-12-11 15:07:57.101556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.216 [2024-12-11 15:07:57.101572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.216 [2024-12-11 15:07:57.101582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.216 [2024-12-11 15:07:57.101745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.216 [2024-12-11 15:07:57.101908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.216 [2024-12-11 15:07:57.101916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.216 [2024-12-11 15:07:57.101922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.216 [2024-12-11 15:07:57.101928] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.216 [2024-12-11 15:07:57.113989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.216 [2024-12-11 15:07:57.114406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.216 [2024-12-11 15:07:57.114450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.216 [2024-12-11 15:07:57.114473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.216 [2024-12-11 15:07:57.115056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.216 [2024-12-11 15:07:57.115563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.216 [2024-12-11 15:07:57.115571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.216 [2024-12-11 15:07:57.115577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.216 [2024-12-11 15:07:57.115582] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.216 [2024-12-11 15:07:57.126944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.216 [2024-12-11 15:07:57.127362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.216 [2024-12-11 15:07:57.127379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.216 [2024-12-11 15:07:57.127386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.216 [2024-12-11 15:07:57.127548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.217 [2024-12-11 15:07:57.127712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.217 [2024-12-11 15:07:57.127719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.217 [2024-12-11 15:07:57.127725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.217 [2024-12-11 15:07:57.127731] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.217 [2024-12-11 15:07:57.139839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.217 [2024-12-11 15:07:57.140257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-12-11 15:07:57.140273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.217 [2024-12-11 15:07:57.140280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.217 [2024-12-11 15:07:57.140443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.217 [2024-12-11 15:07:57.140609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.217 [2024-12-11 15:07:57.140617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.217 [2024-12-11 15:07:57.140623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.217 [2024-12-11 15:07:57.140629] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.217 [2024-12-11 15:07:57.152746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.217 [2024-12-11 15:07:57.153184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-12-11 15:07:57.153229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.217 [2024-12-11 15:07:57.153252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.217 [2024-12-11 15:07:57.153833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.217 [2024-12-11 15:07:57.154434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.217 [2024-12-11 15:07:57.154461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.217 [2024-12-11 15:07:57.154481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.217 [2024-12-11 15:07:57.154501] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.217 [2024-12-11 15:07:57.165668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.217 [2024-12-11 15:07:57.166005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-12-11 15:07:57.166048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.217 [2024-12-11 15:07:57.166071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.217 [2024-12-11 15:07:57.166519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.217 [2024-12-11 15:07:57.166683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.217 [2024-12-11 15:07:57.166690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.217 [2024-12-11 15:07:57.166696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.217 [2024-12-11 15:07:57.166702] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.217 [2024-12-11 15:07:57.178503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.217 [2024-12-11 15:07:57.178931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-12-11 15:07:57.178947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.217 [2024-12-11 15:07:57.178954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.217 [2024-12-11 15:07:57.179116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.217 [2024-12-11 15:07:57.179285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.217 [2024-12-11 15:07:57.179294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.217 [2024-12-11 15:07:57.179302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.217 [2024-12-11 15:07:57.179309] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.217 [2024-12-11 15:07:57.191413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.217 [2024-12-11 15:07:57.191825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-12-11 15:07:57.191841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.217 [2024-12-11 15:07:57.191848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.217 [2024-12-11 15:07:57.192011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.217 [2024-12-11 15:07:57.192179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.217 [2024-12-11 15:07:57.192188] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.217 [2024-12-11 15:07:57.192194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.217 [2024-12-11 15:07:57.192200] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.217 [2024-12-11 15:07:57.204365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.217 [2024-12-11 15:07:57.204700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-12-11 15:07:57.204716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.217 [2024-12-11 15:07:57.204723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.217 [2024-12-11 15:07:57.204887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.217 [2024-12-11 15:07:57.205049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.217 [2024-12-11 15:07:57.205057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.217 [2024-12-11 15:07:57.205063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.217 [2024-12-11 15:07:57.205069] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.217 [2024-12-11 15:07:57.217280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.217 [2024-12-11 15:07:57.217563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-12-11 15:07:57.217579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.217 [2024-12-11 15:07:57.217587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.217 [2024-12-11 15:07:57.217750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.217 [2024-12-11 15:07:57.217914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.217 [2024-12-11 15:07:57.217922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.217 [2024-12-11 15:07:57.217928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.217 [2024-12-11 15:07:57.217935] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.217 [2024-12-11 15:07:57.230228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.217 [2024-12-11 15:07:57.230655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-12-11 15:07:57.230700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.217 [2024-12-11 15:07:57.230724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.217 [2024-12-11 15:07:57.231323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.217 [2024-12-11 15:07:57.231730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.217 [2024-12-11 15:07:57.231737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.217 [2024-12-11 15:07:57.231744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.217 [2024-12-11 15:07:57.231751] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.217 [2024-12-11 15:07:57.243261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.217 [2024-12-11 15:07:57.243663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-12-11 15:07:57.243680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.217 [2024-12-11 15:07:57.243687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.217 [2024-12-11 15:07:57.243860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.217 [2024-12-11 15:07:57.244033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.217 [2024-12-11 15:07:57.244041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.217 [2024-12-11 15:07:57.244048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.217 [2024-12-11 15:07:57.244054] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.217 [2024-12-11 15:07:57.256184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.217 [2024-12-11 15:07:57.256637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.217 [2024-12-11 15:07:57.256654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.217 [2024-12-11 15:07:57.256661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.217 [2024-12-11 15:07:57.256848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.217 [2024-12-11 15:07:57.257046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.217 [2024-12-11 15:07:57.257057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.218 [2024-12-11 15:07:57.257064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.218 [2024-12-11 15:07:57.257071] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.476 [2024-12-11 15:07:57.269177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.476 [2024-12-11 15:07:57.269519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.476 [2024-12-11 15:07:57.269567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.476 [2024-12-11 15:07:57.269605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.476 [2024-12-11 15:07:57.270071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.476 [2024-12-11 15:07:57.270258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.476 [2024-12-11 15:07:57.270267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.476 [2024-12-11 15:07:57.270273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.476 [2024-12-11 15:07:57.270280] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.476 [2024-12-11 15:07:57.282111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.476 [2024-12-11 15:07:57.282496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.476 [2024-12-11 15:07:57.282512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.476 [2024-12-11 15:07:57.282520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.476 [2024-12-11 15:07:57.282683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.476 [2024-12-11 15:07:57.282849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.476 [2024-12-11 15:07:57.282857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.476 [2024-12-11 15:07:57.282863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.476 [2024-12-11 15:07:57.282870] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.476 [2024-12-11 15:07:57.294998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.476 [2024-12-11 15:07:57.295430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.476 [2024-12-11 15:07:57.295475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.476 [2024-12-11 15:07:57.295498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.476 [2024-12-11 15:07:57.295901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.476 [2024-12-11 15:07:57.296064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.476 [2024-12-11 15:07:57.296072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.476 [2024-12-11 15:07:57.296078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.476 [2024-12-11 15:07:57.296084] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.476 [2024-12-11 15:07:57.307908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.476 [2024-12-11 15:07:57.308353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.476 [2024-12-11 15:07:57.308370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.476 [2024-12-11 15:07:57.308378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.476 [2024-12-11 15:07:57.308550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.476 [2024-12-11 15:07:57.308726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.476 [2024-12-11 15:07:57.308734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.476 [2024-12-11 15:07:57.308740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.476 [2024-12-11 15:07:57.308747] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.476 [2024-12-11 15:07:57.320997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.476 [2024-12-11 15:07:57.321427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.476 [2024-12-11 15:07:57.321444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.476 [2024-12-11 15:07:57.321451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.476 [2024-12-11 15:07:57.321623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.476 [2024-12-11 15:07:57.321796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.476 [2024-12-11 15:07:57.321804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.476 [2024-12-11 15:07:57.321810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.476 [2024-12-11 15:07:57.321816] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.476 [2024-12-11 15:07:57.334024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.476 [2024-12-11 15:07:57.334388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.476 [2024-12-11 15:07:57.334405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.476 [2024-12-11 15:07:57.334412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.476 [2024-12-11 15:07:57.334585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.476 [2024-12-11 15:07:57.334758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.476 [2024-12-11 15:07:57.334766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.476 [2024-12-11 15:07:57.334772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.476 [2024-12-11 15:07:57.334778] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.476 [2024-12-11 15:07:57.346856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.476 [2024-12-11 15:07:57.347278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.476 [2024-12-11 15:07:57.347294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.476 [2024-12-11 15:07:57.347301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.476 [2024-12-11 15:07:57.347464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.476 [2024-12-11 15:07:57.347626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.476 [2024-12-11 15:07:57.347634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.476 [2024-12-11 15:07:57.347644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.476 [2024-12-11 15:07:57.347650] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.476 [2024-12-11 15:07:57.359771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.476 [2024-12-11 15:07:57.360189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.476 [2024-12-11 15:07:57.360205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.476 [2024-12-11 15:07:57.360212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.476 [2024-12-11 15:07:57.360375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.476 [2024-12-11 15:07:57.360538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.476 [2024-12-11 15:07:57.360545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.476 [2024-12-11 15:07:57.360551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.476 [2024-12-11 15:07:57.360557] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.476 [2024-12-11 15:07:57.372676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.476 [2024-12-11 15:07:57.373008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.476 [2024-12-11 15:07:57.373025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.476 [2024-12-11 15:07:57.373032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.476 [2024-12-11 15:07:57.373201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.476 [2024-12-11 15:07:57.373364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.476 [2024-12-11 15:07:57.373372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.476 [2024-12-11 15:07:57.373378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.477 [2024-12-11 15:07:57.373383] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.477 [2024-12-11 15:07:57.385506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.477 [2024-12-11 15:07:57.385898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.477 [2024-12-11 15:07:57.385914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.477 [2024-12-11 15:07:57.385921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.477 [2024-12-11 15:07:57.386084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.477 [2024-12-11 15:07:57.386253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.477 [2024-12-11 15:07:57.386262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.477 [2024-12-11 15:07:57.386268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.477 [2024-12-11 15:07:57.386273] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.477 [2024-12-11 15:07:57.398392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.477 [2024-12-11 15:07:57.398784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.477 [2024-12-11 15:07:57.398800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.477 [2024-12-11 15:07:57.398807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.477 [2024-12-11 15:07:57.398969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.477 [2024-12-11 15:07:57.399132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.477 [2024-12-11 15:07:57.399139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.477 [2024-12-11 15:07:57.399145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.477 [2024-12-11 15:07:57.399151] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.477 [2024-12-11 15:07:57.411218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.477 [2024-12-11 15:07:57.411637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.477 [2024-12-11 15:07:57.411653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.477 [2024-12-11 15:07:57.411660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.477 [2024-12-11 15:07:57.411823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.477 [2024-12-11 15:07:57.411985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.477 [2024-12-11 15:07:57.411993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.477 [2024-12-11 15:07:57.411999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.477 [2024-12-11 15:07:57.412005] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.477 [2024-12-11 15:07:57.424137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.477 [2024-12-11 15:07:57.424498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.477 [2024-12-11 15:07:57.424515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.477 [2024-12-11 15:07:57.424522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.477 [2024-12-11 15:07:57.424685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.477 [2024-12-11 15:07:57.424847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.477 [2024-12-11 15:07:57.424855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.477 [2024-12-11 15:07:57.424861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.477 [2024-12-11 15:07:57.424867] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.477 [2024-12-11 15:07:57.437174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.477 [2024-12-11 15:07:57.437510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.477 [2024-12-11 15:07:57.437527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.477 [2024-12-11 15:07:57.437537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.477 [2024-12-11 15:07:57.437701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.477 [2024-12-11 15:07:57.437863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.477 [2024-12-11 15:07:57.437871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.477 [2024-12-11 15:07:57.437877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.477 [2024-12-11 15:07:57.437883] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.477 [2024-12-11 15:07:57.450128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.477 [2024-12-11 15:07:57.450461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.477 [2024-12-11 15:07:57.450478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.477 [2024-12-11 15:07:57.450485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.477 [2024-12-11 15:07:57.450657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.477 [2024-12-11 15:07:57.450830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.477 [2024-12-11 15:07:57.450838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.477 [2024-12-11 15:07:57.450847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.477 [2024-12-11 15:07:57.450854] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.477 [2024-12-11 15:07:57.462938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.477 [2024-12-11 15:07:57.463345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.477 [2024-12-11 15:07:57.463392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.477 [2024-12-11 15:07:57.463415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.477 [2024-12-11 15:07:57.463998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.477 [2024-12-11 15:07:57.464466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.477 [2024-12-11 15:07:57.464475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.477 [2024-12-11 15:07:57.464481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.477 [2024-12-11 15:07:57.464487] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.477 5562.20 IOPS, 21.73 MiB/s [2024-12-11T14:07:57.525Z] [2024-12-11 15:07:57.479749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.477 [2024-12-11 15:07:57.480320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.477 [2024-12-11 15:07:57.480367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.477 [2024-12-11 15:07:57.480390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.477 [2024-12-11 15:07:57.480973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.477 [2024-12-11 15:07:57.481510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.477 [2024-12-11 15:07:57.481522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.477 [2024-12-11 15:07:57.481531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.477 [2024-12-11 15:07:57.481540] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.477 [2024-12-11 15:07:57.492673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.477 [2024-12-11 15:07:57.493064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.477 [2024-12-11 15:07:57.493082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.477 [2024-12-11 15:07:57.493089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.477 [2024-12-11 15:07:57.493263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.477 [2024-12-11 15:07:57.493431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.477 [2024-12-11 15:07:57.493439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.477 [2024-12-11 15:07:57.493445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.477 [2024-12-11 15:07:57.493452] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.477 [2024-12-11 15:07:57.505590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.477 [2024-12-11 15:07:57.505986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.477 [2024-12-11 15:07:57.506040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.477 [2024-12-11 15:07:57.506064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.477 [2024-12-11 15:07:57.506605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.477 [2024-12-11 15:07:57.506779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.477 [2024-12-11 15:07:57.506787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.477 [2024-12-11 15:07:57.506793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.477 [2024-12-11 15:07:57.506799] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.478 [2024-12-11 15:07:57.518618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.478 [2024-12-11 15:07:57.518969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.478 [2024-12-11 15:07:57.518986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.478 [2024-12-11 15:07:57.518994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.478 [2024-12-11 15:07:57.519173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.478 [2024-12-11 15:07:57.519347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.478 [2024-12-11 15:07:57.519355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.478 [2024-12-11 15:07:57.519368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.478 [2024-12-11 15:07:57.519377] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.737 [2024-12-11 15:07:57.531576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.737 [2024-12-11 15:07:57.532000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.737 [2024-12-11 15:07:57.532048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.737 [2024-12-11 15:07:57.532072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.737 [2024-12-11 15:07:57.532550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.737 [2024-12-11 15:07:57.532715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.737 [2024-12-11 15:07:57.532723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.737 [2024-12-11 15:07:57.532729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.737 [2024-12-11 15:07:57.532735] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.737 [2024-12-11 15:07:57.544424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.737 [2024-12-11 15:07:57.544800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.737 [2024-12-11 15:07:57.544816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.737 [2024-12-11 15:07:57.544823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.737 [2024-12-11 15:07:57.544986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.737 [2024-12-11 15:07:57.545150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.737 [2024-12-11 15:07:57.545164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.737 [2024-12-11 15:07:57.545171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.737 [2024-12-11 15:07:57.545177] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.737 [2024-12-11 15:07:57.557262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.737 [2024-12-11 15:07:57.557593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.737 [2024-12-11 15:07:57.557637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.737 [2024-12-11 15:07:57.557660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.737 [2024-12-11 15:07:57.558248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.737 [2024-12-11 15:07:57.558434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.737 [2024-12-11 15:07:57.558442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.737 [2024-12-11 15:07:57.558449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.737 [2024-12-11 15:07:57.558455] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.737 [2024-12-11 15:07:57.570335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.737 [2024-12-11 15:07:57.570615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.737 [2024-12-11 15:07:57.570632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.737 [2024-12-11 15:07:57.570639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.737 [2024-12-11 15:07:57.570817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.737 [2024-12-11 15:07:57.570996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.737 [2024-12-11 15:07:57.571004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.737 [2024-12-11 15:07:57.571010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.737 [2024-12-11 15:07:57.571017] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.737 [2024-12-11 15:07:57.583409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.737 [2024-12-11 15:07:57.583855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.737 [2024-12-11 15:07:57.583900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.737 [2024-12-11 15:07:57.583923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.737 [2024-12-11 15:07:57.584369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.737 [2024-12-11 15:07:57.584542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.737 [2024-12-11 15:07:57.584551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.737 [2024-12-11 15:07:57.584557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.737 [2024-12-11 15:07:57.584563] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.737 [2024-12-11 15:07:57.596481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.737 [2024-12-11 15:07:57.596906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.737 [2024-12-11 15:07:57.596923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.737 [2024-12-11 15:07:57.596931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.737 [2024-12-11 15:07:57.597095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.737 [2024-12-11 15:07:57.597263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.737 [2024-12-11 15:07:57.597272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.737 [2024-12-11 15:07:57.597279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.737 [2024-12-11 15:07:57.597285] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.737 [2024-12-11 15:07:57.609464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.737 [2024-12-11 15:07:57.609868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.737 [2024-12-11 15:07:57.609884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.737 [2024-12-11 15:07:57.609895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.737 [2024-12-11 15:07:57.610058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.737 [2024-12-11 15:07:57.610229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.737 [2024-12-11 15:07:57.610237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.737 [2024-12-11 15:07:57.610243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.737 [2024-12-11 15:07:57.610250] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.737 [2024-12-11 15:07:57.622470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.737 [2024-12-11 15:07:57.622891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.737 [2024-12-11 15:07:57.622908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.737 [2024-12-11 15:07:57.622915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.737 [2024-12-11 15:07:57.623103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.737 [2024-12-11 15:07:57.623292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.737 [2024-12-11 15:07:57.623301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.737 [2024-12-11 15:07:57.623308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.737 [2024-12-11 15:07:57.623315] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.737 [2024-12-11 15:07:57.635382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.737 [2024-12-11 15:07:57.635661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.737 [2024-12-11 15:07:57.635706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.737 [2024-12-11 15:07:57.635728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.737 [2024-12-11 15:07:57.636324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.737 [2024-12-11 15:07:57.636821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.737 [2024-12-11 15:07:57.636829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.737 [2024-12-11 15:07:57.636835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.737 [2024-12-11 15:07:57.636841] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.737 [2024-12-11 15:07:57.648210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.737 [2024-12-11 15:07:57.648609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.737 [2024-12-11 15:07:57.648625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.738 [2024-12-11 15:07:57.648632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.738 [2024-12-11 15:07:57.648795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.738 [2024-12-11 15:07:57.648962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.738 [2024-12-11 15:07:57.648970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.738 [2024-12-11 15:07:57.648976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.738 [2024-12-11 15:07:57.648981] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.738 [2024-12-11 15:07:57.661050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.738 [2024-12-11 15:07:57.661471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.738 [2024-12-11 15:07:57.661516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.738 [2024-12-11 15:07:57.661539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.738 [2024-12-11 15:07:57.662119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.738 [2024-12-11 15:07:57.662720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.738 [2024-12-11 15:07:57.662747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.738 [2024-12-11 15:07:57.662770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.738 [2024-12-11 15:07:57.662777] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.738 [2024-12-11 15:07:57.673949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.738 [2024-12-11 15:07:57.674321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.738 [2024-12-11 15:07:57.674338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.738 [2024-12-11 15:07:57.674345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.738 [2024-12-11 15:07:57.674508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.738 [2024-12-11 15:07:57.674671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.738 [2024-12-11 15:07:57.674679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.738 [2024-12-11 15:07:57.674685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.738 [2024-12-11 15:07:57.674691] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.738 [2024-12-11 15:07:57.687048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.738 [2024-12-11 15:07:57.687479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.738 [2024-12-11 15:07:57.687496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.738 [2024-12-11 15:07:57.687504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.738 [2024-12-11 15:07:57.687682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.738 [2024-12-11 15:07:57.687860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.738 [2024-12-11 15:07:57.687869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.738 [2024-12-11 15:07:57.687879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.738 [2024-12-11 15:07:57.687885] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.738 [2024-12-11 15:07:57.700063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.738 [2024-12-11 15:07:57.700359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.738 [2024-12-11 15:07:57.700376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.738 [2024-12-11 15:07:57.700383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.738 [2024-12-11 15:07:57.700574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.738 [2024-12-11 15:07:57.700752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.738 [2024-12-11 15:07:57.700760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.738 [2024-12-11 15:07:57.700767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.738 [2024-12-11 15:07:57.700773] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.738 [2024-12-11 15:07:57.713188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.738 [2024-12-11 15:07:57.713488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.738 [2024-12-11 15:07:57.713505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.738 [2024-12-11 15:07:57.713513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.738 [2024-12-11 15:07:57.713691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.738 [2024-12-11 15:07:57.713870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.738 [2024-12-11 15:07:57.713878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.738 [2024-12-11 15:07:57.713884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.738 [2024-12-11 15:07:57.713891] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.738 [2024-12-11 15:07:57.726302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.738 [2024-12-11 15:07:57.726707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.738 [2024-12-11 15:07:57.726724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.738 [2024-12-11 15:07:57.726731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.738 [2024-12-11 15:07:57.726903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.738 [2024-12-11 15:07:57.727077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.738 [2024-12-11 15:07:57.727085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.738 [2024-12-11 15:07:57.727091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.738 [2024-12-11 15:07:57.727098] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.738 [2024-12-11 15:07:57.739142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.738 [2024-12-11 15:07:57.739493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.738 [2024-12-11 15:07:57.739509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.738 [2024-12-11 15:07:57.739516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.738 [2024-12-11 15:07:57.739678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.738 [2024-12-11 15:07:57.739842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.738 [2024-12-11 15:07:57.739850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.738 [2024-12-11 15:07:57.739856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.738 [2024-12-11 15:07:57.739862] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.738 [2024-12-11 15:07:57.751997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.738 [2024-12-11 15:07:57.752412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.738 [2024-12-11 15:07:57.752458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.738 [2024-12-11 15:07:57.752481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.738 [2024-12-11 15:07:57.752971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.738 [2024-12-11 15:07:57.753145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.738 [2024-12-11 15:07:57.753153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.738 [2024-12-11 15:07:57.753168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.738 [2024-12-11 15:07:57.753174] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.738 [2024-12-11 15:07:57.764813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.738 [2024-12-11 15:07:57.765155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.738 [2024-12-11 15:07:57.765176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.738 [2024-12-11 15:07:57.765183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.738 [2024-12-11 15:07:57.765347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.738 [2024-12-11 15:07:57.765510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.738 [2024-12-11 15:07:57.765518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.738 [2024-12-11 15:07:57.765523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.738 [2024-12-11 15:07:57.765529] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.738 [2024-12-11 15:07:57.777699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.738 [2024-12-11 15:07:57.778169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.738 [2024-12-11 15:07:57.778190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.738 [2024-12-11 15:07:57.778202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.739 [2024-12-11 15:07:57.778381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.739 [2024-12-11 15:07:57.778560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.739 [2024-12-11 15:07:57.778569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.739 [2024-12-11 15:07:57.778576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.739 [2024-12-11 15:07:57.778583] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.998 [2024-12-11 15:07:57.790599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.998 [2024-12-11 15:07:57.790955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.998 [2024-12-11 15:07:57.790971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.998 [2024-12-11 15:07:57.790979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.998 [2024-12-11 15:07:57.791142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.998 [2024-12-11 15:07:57.791336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.998 [2024-12-11 15:07:57.791345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.998 [2024-12-11 15:07:57.791352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.998 [2024-12-11 15:07:57.791358] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.998 [2024-12-11 15:07:57.803540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.998 [2024-12-11 15:07:57.803962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.998 [2024-12-11 15:07:57.804008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.998 [2024-12-11 15:07:57.804032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.998 [2024-12-11 15:07:57.804626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.998 [2024-12-11 15:07:57.805171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.998 [2024-12-11 15:07:57.805180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.998 [2024-12-11 15:07:57.805186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.998 [2024-12-11 15:07:57.805192] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.998 [2024-12-11 15:07:57.816340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.998 [2024-12-11 15:07:57.816709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.998 [2024-12-11 15:07:57.816726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.998 [2024-12-11 15:07:57.816733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.998 [2024-12-11 15:07:57.816906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.998 [2024-12-11 15:07:57.817083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.998 [2024-12-11 15:07:57.817092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.998 [2024-12-11 15:07:57.817099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.998 [2024-12-11 15:07:57.817105] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.998 [2024-12-11 15:07:57.829549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.998 [2024-12-11 15:07:57.829983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.998 [2024-12-11 15:07:57.830000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.998 [2024-12-11 15:07:57.830008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.999 [2024-12-11 15:07:57.830190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.999 [2024-12-11 15:07:57.830369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.999 [2024-12-11 15:07:57.830378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.999 [2024-12-11 15:07:57.830384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.999 [2024-12-11 15:07:57.830391] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.999 [2024-12-11 15:07:57.842591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.999 [2024-12-11 15:07:57.843009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-12-11 15:07:57.843053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.999 [2024-12-11 15:07:57.843076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.999 [2024-12-11 15:07:57.843526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.999 [2024-12-11 15:07:57.843699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.999 [2024-12-11 15:07:57.843707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.999 [2024-12-11 15:07:57.843714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.999 [2024-12-11 15:07:57.843720] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.999 [2024-12-11 15:07:57.855507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.999 [2024-12-11 15:07:57.855890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-12-11 15:07:57.855933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.999 [2024-12-11 15:07:57.855955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.999 [2024-12-11 15:07:57.856378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.999 [2024-12-11 15:07:57.856551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.999 [2024-12-11 15:07:57.856559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.999 [2024-12-11 15:07:57.856569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.999 [2024-12-11 15:07:57.856576] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.999 [2024-12-11 15:07:57.868329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.999 [2024-12-11 15:07:57.868740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-12-11 15:07:57.868755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.999 [2024-12-11 15:07:57.868762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.999 [2024-12-11 15:07:57.868925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.999 [2024-12-11 15:07:57.869089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.999 [2024-12-11 15:07:57.869096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.999 [2024-12-11 15:07:57.869102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.999 [2024-12-11 15:07:57.869108] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.999 [2024-12-11 15:07:57.881308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.999 [2024-12-11 15:07:57.881701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-12-11 15:07:57.881717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.999 [2024-12-11 15:07:57.881724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.999 [2024-12-11 15:07:57.881886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.999 [2024-12-11 15:07:57.882049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.999 [2024-12-11 15:07:57.882056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.999 [2024-12-11 15:07:57.882062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.999 [2024-12-11 15:07:57.882068] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.999 [2024-12-11 15:07:57.894189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.999 [2024-12-11 15:07:57.894587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-12-11 15:07:57.894604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.999 [2024-12-11 15:07:57.894611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.999 [2024-12-11 15:07:57.894774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.999 [2024-12-11 15:07:57.894937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.999 [2024-12-11 15:07:57.894944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.999 [2024-12-11 15:07:57.894950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.999 [2024-12-11 15:07:57.894956] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.999 [2024-12-11 15:07:57.907078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.999 [2024-12-11 15:07:57.907433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-12-11 15:07:57.907477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.999 [2024-12-11 15:07:57.907500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.999 [2024-12-11 15:07:57.908082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.999 [2024-12-11 15:07:57.908622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.999 [2024-12-11 15:07:57.908639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.999 [2024-12-11 15:07:57.908653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.999 [2024-12-11 15:07:57.908667] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.999 [2024-12-11 15:07:57.922267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.999 [2024-12-11 15:07:57.922686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-12-11 15:07:57.922708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.999 [2024-12-11 15:07:57.922718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.999 [2024-12-11 15:07:57.922972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.999 [2024-12-11 15:07:57.923235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.999 [2024-12-11 15:07:57.923247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.999 [2024-12-11 15:07:57.923257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.999 [2024-12-11 15:07:57.923266] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.999 [2024-12-11 15:07:57.935331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.999 [2024-12-11 15:07:57.935738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-12-11 15:07:57.935755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.999 [2024-12-11 15:07:57.935762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.999 [2024-12-11 15:07:57.935935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.999 [2024-12-11 15:07:57.936107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.999 [2024-12-11 15:07:57.936115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.999 [2024-12-11 15:07:57.936121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.999 [2024-12-11 15:07:57.936127] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:04.999 [2024-12-11 15:07:57.948129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.999 [2024-12-11 15:07:57.948523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.999 [2024-12-11 15:07:57.948539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:04.999 [2024-12-11 15:07:57.948550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:04.999 [2024-12-11 15:07:57.948712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:04.999 [2024-12-11 15:07:57.948876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:04.999 [2024-12-11 15:07:57.948884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:04.999 [2024-12-11 15:07:57.948890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:04.999 [2024-12-11 15:07:57.948896] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:04.999 [2024-12-11 15:07:57.961064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:04.999 [2024-12-11 15:07:57.961446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-12-11 15:07:57.961462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.000 [2024-12-11 15:07:57.961469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.000 [2024-12-11 15:07:57.961641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.000 [2024-12-11 15:07:57.961813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.000 [2024-12-11 15:07:57.961821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.000 [2024-12-11 15:07:57.961827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.000 [2024-12-11 15:07:57.961833] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.000 [2024-12-11 15:07:57.973863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.000 [2024-12-11 15:07:57.974259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-12-11 15:07:57.974275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.000 [2024-12-11 15:07:57.974282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.000 [2024-12-11 15:07:57.974445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.000 [2024-12-11 15:07:57.974608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.000 [2024-12-11 15:07:57.974616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.000 [2024-12-11 15:07:57.974622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.000 [2024-12-11 15:07:57.974627] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/bdevperf.sh: line 35: 3264958 Killed "${NVMF_APP[@]}" "$@" 00:27:05.000 15:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:05.000 15:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:05.000 15:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:05.000 15:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:05.000 15:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:05.000 [2024-12-11 15:07:57.986916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.000 [2024-12-11 15:07:57.987343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-12-11 15:07:57.987360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.000 [2024-12-11 15:07:57.987367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.000 [2024-12-11 15:07:57.987545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.000 [2024-12-11 15:07:57.987726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.000 [2024-12-11 15:07:57.987735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.000 [2024-12-11 15:07:57.987742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.000 [2024-12-11 15:07:57.987748] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.000 15:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3266355 00:27:05.000 15:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3266355 00:27:05.000 15:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:05.000 15:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3266355 ']' 00:27:05.000 15:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.000 15:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:05.000 15:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
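At this point bdevperf.sh has killed the running nvmf target (the Killed "${NVMF_APP[@]}" message) and tgt_init/nvmfappstart -m 0xE brings up a fresh nvmf_tgt (PID 3266355) and waits for its RPC socket. A rough sketch of that restart is below; the paths, namespace and flags are taken from the log, while the RPC sequence is an assumption for illustration only, not the literal content of bdevperf.sh or nvmf/common.sh:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
# Start a fresh target in the test's network namespace with the same flags as above.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
# Crude stand-in for waitforlisten: wait until the RPC socket exists.
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done
# Re-create the TCP listener the host side keeps trying to reach (assumed sequence).
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Once the listener is back, the reconnect attempts above stop failing with errno 111.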
00:27:05.000 15:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:05.000 15:07:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:05.000 [2024-12-11 15:07:57.999968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.000 [2024-12-11 15:07:58.000403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-12-11 15:07:58.000421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.000 [2024-12-11 15:07:58.000428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.000 [2024-12-11 15:07:58.000606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.000 [2024-12-11 15:07:58.000783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.000 [2024-12-11 15:07:58.000791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.000 [2024-12-11 15:07:58.000798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.000 [2024-12-11 15:07:58.000805] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.000 [2024-12-11 15:07:58.013033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.000 [2024-12-11 15:07:58.013476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-12-11 15:07:58.013494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.000 [2024-12-11 15:07:58.013502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.000 [2024-12-11 15:07:58.013684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.000 [2024-12-11 15:07:58.013863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.000 [2024-12-11 15:07:58.013871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.000 [2024-12-11 15:07:58.013878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.000 [2024-12-11 15:07:58.013884] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.000 [2024-12-11 15:07:58.026121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.000 [2024-12-11 15:07:58.026570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-12-11 15:07:58.026587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.000 [2024-12-11 15:07:58.026594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.000 [2024-12-11 15:07:58.026767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.000 [2024-12-11 15:07:58.026941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.000 [2024-12-11 15:07:58.026949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.000 [2024-12-11 15:07:58.026955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.000 [2024-12-11 15:07:58.026962] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.000 [2024-12-11 15:07:58.039202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.000 [2024-12-11 15:07:58.039620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.000 [2024-12-11 15:07:58.039639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.000 [2024-12-11 15:07:58.039648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.000 [2024-12-11 15:07:58.039826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.000 [2024-12-11 15:07:58.040004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.000 [2024-12-11 15:07:58.040012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.000 [2024-12-11 15:07:58.040019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.000 [2024-12-11 15:07:58.040026] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.000 [2024-12-11 15:07:58.042617] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:27:05.000 [2024-12-11 15:07:58.042656] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.260 [2024-12-11 15:07:58.052385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.260 [2024-12-11 15:07:58.052822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-12-11 15:07:58.052839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.260 [2024-12-11 15:07:58.052851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.260 [2024-12-11 15:07:58.053024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.260 [2024-12-11 15:07:58.053205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.260 [2024-12-11 15:07:58.053214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.260 [2024-12-11 15:07:58.053221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.260 [2024-12-11 15:07:58.053228] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.260 [2024-12-11 15:07:58.065470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.260 [2024-12-11 15:07:58.065837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-12-11 15:07:58.065854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.260 [2024-12-11 15:07:58.065862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.260 [2024-12-11 15:07:58.066035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.260 [2024-12-11 15:07:58.066214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.260 [2024-12-11 15:07:58.066223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.260 [2024-12-11 15:07:58.066230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.260 [2024-12-11 15:07:58.066236] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.260 [2024-12-11 15:07:58.078620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.260 [2024-12-11 15:07:58.079036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-12-11 15:07:58.079053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.260 [2024-12-11 15:07:58.079060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.260 [2024-12-11 15:07:58.079245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.260 [2024-12-11 15:07:58.079424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.260 [2024-12-11 15:07:58.079431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.260 [2024-12-11 15:07:58.079438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.260 [2024-12-11 15:07:58.079444] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.260 [2024-12-11 15:07:58.091632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.260 [2024-12-11 15:07:58.092061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-12-11 15:07:58.092078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.260 [2024-12-11 15:07:58.092085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.260 [2024-12-11 15:07:58.092265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.260 [2024-12-11 15:07:58.092442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.260 [2024-12-11 15:07:58.092450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.260 [2024-12-11 15:07:58.092456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.260 [2024-12-11 15:07:58.092463] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.260 [2024-12-11 15:07:58.104649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.260 [2024-12-11 15:07:58.105088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-12-11 15:07:58.105104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.260 [2024-12-11 15:07:58.105112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.260 [2024-12-11 15:07:58.105290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.260 [2024-12-11 15:07:58.105465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.260 [2024-12-11 15:07:58.105474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.260 [2024-12-11 15:07:58.105482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.260 [2024-12-11 15:07:58.105488] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.260 [2024-12-11 15:07:58.117682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.260 [2024-12-11 15:07:58.118090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-12-11 15:07:58.118107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.260 [2024-12-11 15:07:58.118114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.260 [2024-12-11 15:07:58.118293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.260 [2024-12-11 15:07:58.118467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.260 [2024-12-11 15:07:58.118475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.260 [2024-12-11 15:07:58.118481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.260 [2024-12-11 15:07:58.118488] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.260 [2024-12-11 15:07:58.125827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:05.260 [2024-12-11 15:07:58.130697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.260 [2024-12-11 15:07:58.131134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-12-11 15:07:58.131151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.260 [2024-12-11 15:07:58.131165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.260 [2024-12-11 15:07:58.131338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.260 [2024-12-11 15:07:58.131512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.260 [2024-12-11 15:07:58.131521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.260 [2024-12-11 15:07:58.131531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.260 [2024-12-11 15:07:58.131537] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.260 [2024-12-11 15:07:58.143746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.260 [2024-12-11 15:07:58.144181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-12-11 15:07:58.144198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.260 [2024-12-11 15:07:58.144206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.260 [2024-12-11 15:07:58.144378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.260 [2024-12-11 15:07:58.144551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.260 [2024-12-11 15:07:58.144559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.260 [2024-12-11 15:07:58.144566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.260 [2024-12-11 15:07:58.144574] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.260 [2024-12-11 15:07:58.156765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.260 [2024-12-11 15:07:58.157197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.260 [2024-12-11 15:07:58.157214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.260 [2024-12-11 15:07:58.157222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.260 [2024-12-11 15:07:58.157395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.260 [2024-12-11 15:07:58.157569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.260 [2024-12-11 15:07:58.157578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.260 [2024-12-11 15:07:58.157586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.260 [2024-12-11 15:07:58.157592] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.260 [2024-12-11 15:07:58.166181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.260 [2024-12-11 15:07:58.166206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.260 [2024-12-11 15:07:58.166217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.261 [2024-12-11 15:07:58.166225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.261 [2024-12-11 15:07:58.166232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:05.261 [2024-12-11 15:07:58.167558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.261 [2024-12-11 15:07:58.167665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.261 [2024-12-11 15:07:58.167666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.261 [2024-12-11 15:07:58.169857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.261 [2024-12-11 15:07:58.170298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-12-11 15:07:58.170316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.261 [2024-12-11 15:07:58.170329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.261 [2024-12-11 15:07:58.170514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.261 [2024-12-11 15:07:58.170687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.261 [2024-12-11 15:07:58.170695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.261 [2024-12-11 15:07:58.170701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.261 [2024-12-11 15:07:58.170708] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.261 [2024-12-11 15:07:58.182935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.261 [2024-12-11 15:07:58.183307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-12-11 15:07:58.183327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.261 [2024-12-11 15:07:58.183335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.261 [2024-12-11 15:07:58.183514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.261 [2024-12-11 15:07:58.183692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.261 [2024-12-11 15:07:58.183700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.261 [2024-12-11 15:07:58.183707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.261 [2024-12-11 15:07:58.183714] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
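The -m 0xE core mask passed to nvmf_tgt is binary 1110, i.e. cores 1-3, which is why the app reports three available cores and starts reactors on cores 1, 2 and 3 above. A quick, illustrative check of the mask:

# 0xE = 0b1110 -> cores 1, 2 and 3
printf 'cores in mask 0xE:'
for i in {0..3}; do (( (0xE >> i) & 1 )) && printf ' %d' "$i"; done
echo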
00:27:05.261 [2024-12-11 15:07:58.196111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.261 [2024-12-11 15:07:58.196572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-12-11 15:07:58.196592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.261 [2024-12-11 15:07:58.196600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.261 [2024-12-11 15:07:58.196778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.261 [2024-12-11 15:07:58.196956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.261 [2024-12-11 15:07:58.196964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.261 [2024-12-11 15:07:58.196971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.261 [2024-12-11 15:07:58.196978] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.261 [2024-12-11 15:07:58.209197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.261 [2024-12-11 15:07:58.209608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-12-11 15:07:58.209626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.261 [2024-12-11 15:07:58.209635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.261 [2024-12-11 15:07:58.209813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.261 [2024-12-11 15:07:58.209997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.261 [2024-12-11 15:07:58.210005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.261 [2024-12-11 15:07:58.210012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.261 [2024-12-11 15:07:58.210019] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.261 [2024-12-11 15:07:58.222410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.261 [2024-12-11 15:07:58.222840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-12-11 15:07:58.222860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.261 [2024-12-11 15:07:58.222868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.261 [2024-12-11 15:07:58.223046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.261 [2024-12-11 15:07:58.223227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.261 [2024-12-11 15:07:58.223235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.261 [2024-12-11 15:07:58.223243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.261 [2024-12-11 15:07:58.223250] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.261 [2024-12-11 15:07:58.235476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.261 [2024-12-11 15:07:58.235896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-12-11 15:07:58.235913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.261 [2024-12-11 15:07:58.235921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.261 [2024-12-11 15:07:58.236099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.261 [2024-12-11 15:07:58.236282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.261 [2024-12-11 15:07:58.236291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.261 [2024-12-11 15:07:58.236298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.261 [2024-12-11 15:07:58.236305] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.261 [2024-12-11 15:07:58.248680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.261 [2024-12-11 15:07:58.249053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-12-11 15:07:58.249070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.261 [2024-12-11 15:07:58.249078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.261 [2024-12-11 15:07:58.249262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.261 [2024-12-11 15:07:58.249441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.261 [2024-12-11 15:07:58.249449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.261 [2024-12-11 15:07:58.249462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.261 [2024-12-11 15:07:58.249469] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.261 [2024-12-11 15:07:58.261838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.261 [2024-12-11 15:07:58.262272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-12-11 15:07:58.262290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.261 [2024-12-11 15:07:58.262298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.261 [2024-12-11 15:07:58.262476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.261 [2024-12-11 15:07:58.262656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.261 [2024-12-11 15:07:58.262664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.261 [2024-12-11 15:07:58.262671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.261 [2024-12-11 15:07:58.262677] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.261 [2024-12-11 15:07:58.274882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.261 [2024-12-11 15:07:58.275317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-12-11 15:07:58.275335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.261 [2024-12-11 15:07:58.275343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.261 [2024-12-11 15:07:58.275520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.261 [2024-12-11 15:07:58.275698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.261 [2024-12-11 15:07:58.275707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.261 [2024-12-11 15:07:58.275713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.261 [2024-12-11 15:07:58.275720] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.261 [2024-12-11 15:07:58.287932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.261 [2024-12-11 15:07:58.288363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.261 [2024-12-11 15:07:58.288380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.261 [2024-12-11 15:07:58.288389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.261 [2024-12-11 15:07:58.288566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.261 [2024-12-11 15:07:58.288744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.261 [2024-12-11 15:07:58.288752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.262 [2024-12-11 15:07:58.288758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.262 [2024-12-11 15:07:58.288765] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.262 [2024-12-11 15:07:58.301010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.262 [2024-12-11 15:07:58.301484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.262 [2024-12-11 15:07:58.301502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.262 [2024-12-11 15:07:58.301511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.262 [2024-12-11 15:07:58.301689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.262 [2024-12-11 15:07:58.301867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.262 [2024-12-11 15:07:58.301879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.262 [2024-12-11 15:07:58.301886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.262 [2024-12-11 15:07:58.301893] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.520 [2024-12-11 15:07:58.314217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.520 [2024-12-11 15:07:58.314591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.520 [2024-12-11 15:07:58.314609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.520 [2024-12-11 15:07:58.314618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.520 [2024-12-11 15:07:58.314795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.520 [2024-12-11 15:07:58.314973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.520 [2024-12-11 15:07:58.314981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.520 [2024-12-11 15:07:58.314988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.520 [2024-12-11 15:07:58.314994] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.520 [2024-12-11 15:07:58.327425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.520 [2024-12-11 15:07:58.327848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.520 [2024-12-11 15:07:58.327866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.520 [2024-12-11 15:07:58.327874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.520 [2024-12-11 15:07:58.328052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.520 [2024-12-11 15:07:58.328235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.520 [2024-12-11 15:07:58.328244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.520 [2024-12-11 15:07:58.328251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.520 [2024-12-11 15:07:58.328258] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.520 [2024-12-11 15:07:58.340485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.520 [2024-12-11 15:07:58.340894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.520 [2024-12-11 15:07:58.340911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.520 [2024-12-11 15:07:58.340923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.520 [2024-12-11 15:07:58.341100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.520 [2024-12-11 15:07:58.341284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.520 [2024-12-11 15:07:58.341293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.520 [2024-12-11 15:07:58.341300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.520 [2024-12-11 15:07:58.341306] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.520 [2024-12-11 15:07:58.353691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.520 [2024-12-11 15:07:58.354126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.520 [2024-12-11 15:07:58.354143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.520 [2024-12-11 15:07:58.354150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.520 [2024-12-11 15:07:58.354330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.520 [2024-12-11 15:07:58.354508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.520 [2024-12-11 15:07:58.354516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.520 [2024-12-11 15:07:58.354523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.520 [2024-12-11 15:07:58.354529] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.520 [2024-12-11 15:07:58.366749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.521 [2024-12-11 15:07:58.367156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.521 [2024-12-11 15:07:58.367178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.521 [2024-12-11 15:07:58.367185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.521 [2024-12-11 15:07:58.367363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.521 [2024-12-11 15:07:58.367541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.521 [2024-12-11 15:07:58.367549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.521 [2024-12-11 15:07:58.367555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.521 [2024-12-11 15:07:58.367561] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.521 [2024-12-11 15:07:58.379938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.521 [2024-12-11 15:07:58.380351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.521 [2024-12-11 15:07:58.380368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.521 [2024-12-11 15:07:58.380375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.521 [2024-12-11 15:07:58.380552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.521 [2024-12-11 15:07:58.380733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.521 [2024-12-11 15:07:58.380741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.521 [2024-12-11 15:07:58.380748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.521 [2024-12-11 15:07:58.380754] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.521 [2024-12-11 15:07:58.393125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.521 [2024-12-11 15:07:58.393545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.521 [2024-12-11 15:07:58.393562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.521 [2024-12-11 15:07:58.393570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.521 [2024-12-11 15:07:58.393747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.521 [2024-12-11 15:07:58.393924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.521 [2024-12-11 15:07:58.393932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.521 [2024-12-11 15:07:58.393939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.521 [2024-12-11 15:07:58.393945] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.521 [2024-12-11 15:07:58.406333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.521 [2024-12-11 15:07:58.406753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.521 [2024-12-11 15:07:58.406769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.521 [2024-12-11 15:07:58.406777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.521 [2024-12-11 15:07:58.406954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.521 [2024-12-11 15:07:58.407131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.521 [2024-12-11 15:07:58.407139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.521 [2024-12-11 15:07:58.407146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.521 [2024-12-11 15:07:58.407152] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.521 [2024-12-11 15:07:58.419537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.521 [2024-12-11 15:07:58.419954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.521 [2024-12-11 15:07:58.419970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.521 [2024-12-11 15:07:58.419978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.521 [2024-12-11 15:07:58.420156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.521 [2024-12-11 15:07:58.420339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.521 [2024-12-11 15:07:58.420347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.521 [2024-12-11 15:07:58.420358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.521 [2024-12-11 15:07:58.420365] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.521 [2024-12-11 15:07:58.432594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.521 [2024-12-11 15:07:58.433009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.521 [2024-12-11 15:07:58.433026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.521 [2024-12-11 15:07:58.433033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.521 [2024-12-11 15:07:58.433215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.521 [2024-12-11 15:07:58.433394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.521 [2024-12-11 15:07:58.433402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.521 [2024-12-11 15:07:58.433409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.521 [2024-12-11 15:07:58.433415] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.521 [2024-12-11 15:07:58.445781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.521 [2024-12-11 15:07:58.446188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.521 [2024-12-11 15:07:58.446205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.521 [2024-12-11 15:07:58.446212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.521 [2024-12-11 15:07:58.446389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.521 [2024-12-11 15:07:58.446567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.521 [2024-12-11 15:07:58.446575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.521 [2024-12-11 15:07:58.446582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.521 [2024-12-11 15:07:58.446588] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.521 [2024-12-11 15:07:58.458960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.521 [2024-12-11 15:07:58.459304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.521 [2024-12-11 15:07:58.459321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.521 [2024-12-11 15:07:58.459328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.521 [2024-12-11 15:07:58.459505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.521 [2024-12-11 15:07:58.459683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.521 [2024-12-11 15:07:58.459690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.521 [2024-12-11 15:07:58.459697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.521 [2024-12-11 15:07:58.459703] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.521 [2024-12-11 15:07:58.472084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.521 [2024-12-11 15:07:58.472519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.521 [2024-12-11 15:07:58.472535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.521 [2024-12-11 15:07:58.472543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.521 [2024-12-11 15:07:58.472720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.521 [2024-12-11 15:07:58.472897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.521 [2024-12-11 15:07:58.472905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.521 [2024-12-11 15:07:58.472911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.521 [2024-12-11 15:07:58.472918] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.521 4635.17 IOPS, 18.11 MiB/s [2024-12-11T14:07:58.569Z] [2024-12-11 15:07:58.485271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.521 [2024-12-11 15:07:58.485613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.521 [2024-12-11 15:07:58.485631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.521 [2024-12-11 15:07:58.485639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.521 [2024-12-11 15:07:58.485816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.521 [2024-12-11 15:07:58.485994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.521 [2024-12-11 15:07:58.486002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.521 [2024-12-11 15:07:58.486009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.521 [2024-12-11 15:07:58.486015] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.521 [2024-12-11 15:07:58.498400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.521 [2024-12-11 15:07:58.498856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.521 [2024-12-11 15:07:58.498872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.522 [2024-12-11 15:07:58.498880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.522 [2024-12-11 15:07:58.499057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.522 [2024-12-11 15:07:58.499240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.522 [2024-12-11 15:07:58.499249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.522 [2024-12-11 15:07:58.499256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.522 [2024-12-11 15:07:58.499262] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.522 [2024-12-11 15:07:58.511478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.522 [2024-12-11 15:07:58.511856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.522 [2024-12-11 15:07:58.511873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.522 [2024-12-11 15:07:58.511884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.522 [2024-12-11 15:07:58.512061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.522 [2024-12-11 15:07:58.512243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.522 [2024-12-11 15:07:58.512251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.522 [2024-12-11 15:07:58.512258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.522 [2024-12-11 15:07:58.512264] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.522 [2024-12-11 15:07:58.524651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.522 [2024-12-11 15:07:58.525082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.522 [2024-12-11 15:07:58.525098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.522 [2024-12-11 15:07:58.525105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.522 [2024-12-11 15:07:58.525287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.522 [2024-12-11 15:07:58.525465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.522 [2024-12-11 15:07:58.525473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.522 [2024-12-11 15:07:58.525479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.522 [2024-12-11 15:07:58.525485] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.522 [2024-12-11 15:07:58.537713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.522 [2024-12-11 15:07:58.538150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.522 [2024-12-11 15:07:58.538171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.522 [2024-12-11 15:07:58.538179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.522 [2024-12-11 15:07:58.538356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.522 [2024-12-11 15:07:58.538533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.522 [2024-12-11 15:07:58.538541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.522 [2024-12-11 15:07:58.538547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.522 [2024-12-11 15:07:58.538554] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.522 [2024-12-11 15:07:58.550771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.522 [2024-12-11 15:07:58.551207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.522 [2024-12-11 15:07:58.551224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.522 [2024-12-11 15:07:58.551232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.522 [2024-12-11 15:07:58.551409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.522 [2024-12-11 15:07:58.551592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.522 [2024-12-11 15:07:58.551600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.522 [2024-12-11 15:07:58.551606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.522 [2024-12-11 15:07:58.551613] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.522 [2024-12-11 15:07:58.563884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.522 [2024-12-11 15:07:58.564332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.522 [2024-12-11 15:07:58.564350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.522 [2024-12-11 15:07:58.564358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.522 [2024-12-11 15:07:58.564537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.522 [2024-12-11 15:07:58.564716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.522 [2024-12-11 15:07:58.564724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.522 [2024-12-11 15:07:58.564730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.522 [2024-12-11 15:07:58.564737] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.781 [2024-12-11 15:07:58.577028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.781 [2024-12-11 15:07:58.577475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-12-11 15:07:58.577493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-12-11 15:07:58.577501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.781 [2024-12-11 15:07:58.577679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.781 [2024-12-11 15:07:58.577857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.781 [2024-12-11 15:07:58.577866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.781 [2024-12-11 15:07:58.577873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.781 [2024-12-11 15:07:58.577879] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.781 [2024-12-11 15:07:58.590093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.781 [2024-12-11 15:07:58.590510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-12-11 15:07:58.590528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-12-11 15:07:58.590536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.781 [2024-12-11 15:07:58.590713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.781 [2024-12-11 15:07:58.590891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.781 [2024-12-11 15:07:58.590898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.781 [2024-12-11 15:07:58.590909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.781 [2024-12-11 15:07:58.590915] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.781 [2024-12-11 15:07:58.603169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.781 [2024-12-11 15:07:58.603609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-12-11 15:07:58.603628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-12-11 15:07:58.603636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.781 [2024-12-11 15:07:58.603814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.781 [2024-12-11 15:07:58.603992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.781 [2024-12-11 15:07:58.604000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.781 [2024-12-11 15:07:58.604007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.781 [2024-12-11 15:07:58.604014] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.781 [2024-12-11 15:07:58.616234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.781 [2024-12-11 15:07:58.616599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-12-11 15:07:58.616616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-12-11 15:07:58.616624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.781 [2024-12-11 15:07:58.616801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.781 [2024-12-11 15:07:58.616979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.781 [2024-12-11 15:07:58.616987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.781 [2024-12-11 15:07:58.616993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.781 [2024-12-11 15:07:58.616999] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.781 [2024-12-11 15:07:58.629393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.781 [2024-12-11 15:07:58.629828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-12-11 15:07:58.629844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-12-11 15:07:58.629852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.781 [2024-12-11 15:07:58.630029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.781 [2024-12-11 15:07:58.630218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.781 [2024-12-11 15:07:58.630227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.781 [2024-12-11 15:07:58.630234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.781 [2024-12-11 15:07:58.630241] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.781 [2024-12-11 15:07:58.642453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.781 [2024-12-11 15:07:58.642893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-12-11 15:07:58.642910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-12-11 15:07:58.642918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.781 [2024-12-11 15:07:58.643095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.781 [2024-12-11 15:07:58.643276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.781 [2024-12-11 15:07:58.643285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.781 [2024-12-11 15:07:58.643291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.781 [2024-12-11 15:07:58.643298] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.781 [2024-12-11 15:07:58.655501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.781 [2024-12-11 15:07:58.655861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-12-11 15:07:58.655878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-12-11 15:07:58.655886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.781 [2024-12-11 15:07:58.656063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.781 [2024-12-11 15:07:58.656244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.781 [2024-12-11 15:07:58.656253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.781 [2024-12-11 15:07:58.656260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.781 [2024-12-11 15:07:58.656266] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.781 [2024-12-11 15:07:58.668648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.781 [2024-12-11 15:07:58.669085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.781 [2024-12-11 15:07:58.669101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.781 [2024-12-11 15:07:58.669109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.781 [2024-12-11 15:07:58.669291] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.782 [2024-12-11 15:07:58.669470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.782 [2024-12-11 15:07:58.669478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.782 [2024-12-11 15:07:58.669485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.782 [2024-12-11 15:07:58.669491] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.782 [2024-12-11 15:07:58.681701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.782 [2024-12-11 15:07:58.682135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-12-11 15:07:58.682151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.782 [2024-12-11 15:07:58.682167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.782 [2024-12-11 15:07:58.682345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.782 [2024-12-11 15:07:58.682523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.782 [2024-12-11 15:07:58.682531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.782 [2024-12-11 15:07:58.682537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.782 [2024-12-11 15:07:58.682544] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.782 [2024-12-11 15:07:58.694745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.782 [2024-12-11 15:07:58.695180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-12-11 15:07:58.695197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.782 [2024-12-11 15:07:58.695205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.782 [2024-12-11 15:07:58.695382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.782 [2024-12-11 15:07:58.695561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.782 [2024-12-11 15:07:58.695569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.782 [2024-12-11 15:07:58.695575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.782 [2024-12-11 15:07:58.695582] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.782 [2024-12-11 15:07:58.707806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.782 [2024-12-11 15:07:58.708239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-12-11 15:07:58.708257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.782 [2024-12-11 15:07:58.708265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.782 [2024-12-11 15:07:58.708442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.782 [2024-12-11 15:07:58.708620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.782 [2024-12-11 15:07:58.708628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.782 [2024-12-11 15:07:58.708635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.782 [2024-12-11 15:07:58.708641] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.782 [2024-12-11 15:07:58.720878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.782 [2024-12-11 15:07:58.721246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-12-11 15:07:58.721264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.782 [2024-12-11 15:07:58.721272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.782 [2024-12-11 15:07:58.721449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.782 [2024-12-11 15:07:58.721631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.782 [2024-12-11 15:07:58.721642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.782 [2024-12-11 15:07:58.721652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.782 [2024-12-11 15:07:58.721659] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.782 [2024-12-11 15:07:58.734073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.782 [2024-12-11 15:07:58.734488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-12-11 15:07:58.734504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.782 [2024-12-11 15:07:58.734512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.782 [2024-12-11 15:07:58.734689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.782 [2024-12-11 15:07:58.734866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.782 [2024-12-11 15:07:58.734874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.782 [2024-12-11 15:07:58.734881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.782 [2024-12-11 15:07:58.734888] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.782 [2024-12-11 15:07:58.747278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.782 [2024-12-11 15:07:58.747643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-12-11 15:07:58.747659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.782 [2024-12-11 15:07:58.747667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.782 [2024-12-11 15:07:58.747844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.782 [2024-12-11 15:07:58.748022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.782 [2024-12-11 15:07:58.748030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.782 [2024-12-11 15:07:58.748037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.782 [2024-12-11 15:07:58.748043] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.782 [2024-12-11 15:07:58.760418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.782 [2024-12-11 15:07:58.760850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-12-11 15:07:58.760866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.782 [2024-12-11 15:07:58.760873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.782 [2024-12-11 15:07:58.761051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.782 [2024-12-11 15:07:58.761235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.782 [2024-12-11 15:07:58.761243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.782 [2024-12-11 15:07:58.761254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.782 [2024-12-11 15:07:58.761260] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.782 [2024-12-11 15:07:58.773467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.782 [2024-12-11 15:07:58.773819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.782 [2024-12-11 15:07:58.773835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.782 [2024-12-11 15:07:58.773842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.782 [2024-12-11 15:07:58.774019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.782 [2024-12-11 15:07:58.774201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.782 [2024-12-11 15:07:58.774210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.783 [2024-12-11 15:07:58.774217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.783 [2024-12-11 15:07:58.774223] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.783 [2024-12-11 15:07:58.786606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.783 [2024-12-11 15:07:58.787044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-12-11 15:07:58.787060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.783 [2024-12-11 15:07:58.787068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.783 [2024-12-11 15:07:58.787248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.783 [2024-12-11 15:07:58.787426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.783 [2024-12-11 15:07:58.787434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.783 [2024-12-11 15:07:58.787441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.783 [2024-12-11 15:07:58.787448] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.783 [2024-12-11 15:07:58.799660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.783 [2024-12-11 15:07:58.800095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-12-11 15:07:58.800112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.783 [2024-12-11 15:07:58.800119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.783 [2024-12-11 15:07:58.800303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.783 [2024-12-11 15:07:58.800481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.783 [2024-12-11 15:07:58.800489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.783 [2024-12-11 15:07:58.800496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.783 [2024-12-11 15:07:58.800503] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:05.783 [2024-12-11 15:07:58.812721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:05.783 [2024-12-11 15:07:58.813169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.783 [2024-12-11 15:07:58.813186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:05.783 [2024-12-11 15:07:58.813194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:05.783 [2024-12-11 15:07:58.813371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:05.783 [2024-12-11 15:07:58.813550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:05.783 [2024-12-11 15:07:58.813558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:05.783 [2024-12-11 15:07:58.813565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:05.783 [2024-12-11 15:07:58.813571] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:05.783 [2024-12-11 15:07:58.825865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.042 [2024-12-11 15:07:58.826224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.042 [2024-12-11 15:07:58.826245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:06.042 [2024-12-11 15:07:58.826254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:06.042 [2024-12-11 15:07:58.826432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:06.042 [2024-12-11 15:07:58.826611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.042 [2024-12-11 15:07:58.826621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.042 [2024-12-11 15:07:58.826627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.042 [2024-12-11 15:07:58.826634] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.042 [2024-12-11 15:07:58.839069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.042 [2024-12-11 15:07:58.839407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.042 [2024-12-11 15:07:58.839426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:06.042 [2024-12-11 15:07:58.839434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:06.042 [2024-12-11 15:07:58.839613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:06.042 [2024-12-11 15:07:58.839792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.042 [2024-12-11 15:07:58.839801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.042 [2024-12-11 15:07:58.839808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.042 [2024-12-11 15:07:58.839815] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.042 [2024-12-11 15:07:58.852231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.042 [2024-12-11 15:07:58.852691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.042 [2024-12-11 15:07:58.852709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:06.042 [2024-12-11 15:07:58.852722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:06.042 [2024-12-11 15:07:58.852900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:06.042 [2024-12-11 15:07:58.853078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.042 [2024-12-11 15:07:58.853086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.042 [2024-12-11 15:07:58.853093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.042 [2024-12-11 15:07:58.853100] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.042 [2024-12-11 15:07:58.865318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.042 [2024-12-11 15:07:58.865678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.042 [2024-12-11 15:07:58.865695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:06.042 [2024-12-11 15:07:58.865703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:06.042 [2024-12-11 15:07:58.865880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:06.042 [2024-12-11 15:07:58.866058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.042 [2024-12-11 15:07:58.866066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.042 [2024-12-11 15:07:58.866073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.042 [2024-12-11 15:07:58.866079] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.042 [2024-12-11 15:07:58.878462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.042 [2024-12-11 15:07:58.878814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.042 [2024-12-11 15:07:58.878831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:06.042 [2024-12-11 15:07:58.878839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:06.042 [2024-12-11 15:07:58.879016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:06.042 [2024-12-11 15:07:58.879201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.042 [2024-12-11 15:07:58.879210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.042 [2024-12-11 15:07:58.879216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.042 [2024-12-11 15:07:58.879223] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.042 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:06.042 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:06.042 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:06.042 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:06.042 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.042 [2024-12-11 15:07:58.891615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.042 [2024-12-11 15:07:58.891959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.042 [2024-12-11 15:07:58.891980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:06.042 [2024-12-11 15:07:58.891988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:06.042 [2024-12-11 15:07:58.892170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:06.042 [2024-12-11 15:07:58.892350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.042 [2024-12-11 15:07:58.892358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.042 [2024-12-11 15:07:58.892368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.042 [2024-12-11 15:07:58.892374] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.042 [2024-12-11 15:07:58.904783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.042 [2024-12-11 15:07:58.905065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.042 [2024-12-11 15:07:58.905082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:06.042 [2024-12-11 15:07:58.905089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:06.042 [2024-12-11 15:07:58.905271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:06.042 [2024-12-11 15:07:58.905451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.042 [2024-12-11 15:07:58.905459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.042 [2024-12-11 15:07:58.905465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.042 [2024-12-11 15:07:58.905472] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.042 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.042 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:06.042 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.042 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.042 [2024-12-11 15:07:58.917870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.042 [2024-12-11 15:07:58.918152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.042 [2024-12-11 15:07:58.918174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:06.042 [2024-12-11 15:07:58.918182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:06.042 [2024-12-11 15:07:58.918359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:06.042 [2024-12-11 15:07:58.918536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.042 [2024-12-11 15:07:58.918544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.042 [2024-12-11 15:07:58.918551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.042 [2024-12-11 15:07:58.918557] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.042 [2024-12-11 15:07:58.923728] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.042 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.042 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:06.042 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.042 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.042 [2024-12-11 15:07:58.930962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.043 [2024-12-11 15:07:58.931262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.043 [2024-12-11 15:07:58.931279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:06.043 [2024-12-11 15:07:58.931286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:06.043 [2024-12-11 15:07:58.931464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:06.043 [2024-12-11 15:07:58.931643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.043 [2024-12-11 15:07:58.931651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.043 [2024-12-11 15:07:58.931658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.043 [2024-12-11 15:07:58.931664] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.043 [2024-12-11 15:07:58.944058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.043 [2024-12-11 15:07:58.944476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.043 [2024-12-11 15:07:58.944493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:06.043 [2024-12-11 15:07:58.944501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:06.043 [2024-12-11 15:07:58.944678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:06.043 [2024-12-11 15:07:58.944855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.043 [2024-12-11 15:07:58.944863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.043 [2024-12-11 15:07:58.944870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.043 [2024-12-11 15:07:58.944876] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.043 [2024-12-11 15:07:58.957113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.043 [2024-12-11 15:07:58.957408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.043 [2024-12-11 15:07:58.957425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:06.043 [2024-12-11 15:07:58.957433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:06.043 [2024-12-11 15:07:58.957611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:06.043 [2024-12-11 15:07:58.957788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.043 [2024-12-11 15:07:58.957796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.043 [2024-12-11 15:07:58.957803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.043 [2024-12-11 15:07:58.957810] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:06.043 Malloc0 00:27:06.043 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.043 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:06.043 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.043 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.043 [2024-12-11 15:07:58.970207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.043 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.043 [2024-12-11 15:07:58.970586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:06.043 [2024-12-11 15:07:58.970603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18b11a0 with addr=10.0.0.2, port=4420 00:27:06.043 [2024-12-11 15:07:58.970611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18b11a0 is same with the state(6) to be set 00:27:06.043 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:06.043 [2024-12-11 15:07:58.970787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18b11a0 (9): Bad file descriptor 00:27:06.043 [2024-12-11 15:07:58.970967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:06.043 [2024-12-11 15:07:58.970975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:06.043 [2024-12-11 15:07:58.970983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:06.043 [2024-12-11 15:07:58.970989] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:06.043 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.043 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.043 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.043 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:06.043 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.043 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:06.043 [2024-12-11 15:07:58.981820] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:06.043 [2024-12-11 15:07:58.983386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:06.043 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.043 15:07:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3265358 00:27:06.043 [2024-12-11 15:07:59.058811] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:27:07.540 4634.43 IOPS, 18.10 MiB/s [2024-12-11T14:08:01.519Z] 5442.12 IOPS, 21.26 MiB/s [2024-12-11T14:08:02.890Z] 6069.11 IOPS, 23.71 MiB/s [2024-12-11T14:08:03.822Z] 6582.20 IOPS, 25.71 MiB/s [2024-12-11T14:08:04.754Z] 6982.27 IOPS, 27.27 MiB/s [2024-12-11T14:08:05.686Z] 7325.42 IOPS, 28.61 MiB/s [2024-12-11T14:08:06.618Z] 7617.69 IOPS, 29.76 MiB/s [2024-12-11T14:08:07.550Z] 7867.79 IOPS, 30.73 MiB/s [2024-12-11T14:08:07.550Z] 8082.33 IOPS, 31.57 MiB/s 00:27:14.502 Latency(us) 00:27:14.502 [2024-12-11T14:08:07.550Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:14.502 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:14.502 Verification LBA range: start 0x0 length 0x4000 00:27:14.502 Nvme1n1 : 15.01 8086.95 31.59 12953.27 0.00 6063.54 441.66 23251.03 00:27:14.502 [2024-12-11T14:08:07.550Z] =================================================================================================================== 00:27:14.502 [2024-12-11T14:08:07.550Z] Total : 8086.95 31.59 12953.27 0.00 6063.54 441.66 23251.03 00:27:14.760 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:14.760 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:14.760 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.760 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:14.760 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.760 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:14.760 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 
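For reference, the bdevperf host test above brings up its NVMe/TCP target entirely through SPDK JSON-RPCs before the reconnect workload runs. The test drives them through its rpc_cmd wrapper; a condensed sketch as direct scripts/rpc.py calls (method names and arguments are taken from the trace above; the rpc.py path and the default RPC socket are assumptions):

  # transport, backing bdev, subsystem, namespace, listener -- in that order, as traced above
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Until that final listener call, connect() to 10.0.0.2:4420 is refused (errno 111, ECONNREFUSED), which is what the long run of "Resetting controller failed" entries above reflects; the first reset attempted after the listener notice completes with "Resetting controller successful".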
00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:14.761 rmmod nvme_tcp 00:27:14.761 rmmod nvme_fabrics 00:27:14.761 rmmod nvme_keyring 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3266355 ']' 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3266355 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3266355 ']' 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3266355 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3266355 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3266355' 00:27:14.761 killing process with pid 3266355 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3266355 00:27:14.761 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3266355 00:27:15.019 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:15.019 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:15.019 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:15.019 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:27:15.019 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:27:15.019 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:15.019 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:27:15.019 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:15.019 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:15.019 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.019 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.019 15:08:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.555 15:08:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:17.555 00:27:17.555 real 0m26.197s 00:27:17.555 user 1m1.515s 00:27:17.555 sys 0m6.747s 00:27:17.555 15:08:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:27:17.555 15:08:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:17.555 ************************************ 00:27:17.555 END TEST nvmf_bdevperf 00:27:17.555 ************************************ 00:27:17.555 15:08:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:17.555 15:08:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:17.555 15:08:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:17.555 15:08:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.556 ************************************ 00:27:17.556 START TEST nvmf_target_disconnect 00:27:17.556 ************************************ 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:17.556 * Looking for test storage... 00:27:17.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:17.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.556 --rc genhtml_branch_coverage=1 00:27:17.556 --rc genhtml_function_coverage=1 00:27:17.556 --rc genhtml_legend=1 00:27:17.556 --rc geninfo_all_blocks=1 00:27:17.556 --rc geninfo_unexecuted_blocks=1 00:27:17.556 00:27:17.556 ' 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:17.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.556 --rc genhtml_branch_coverage=1 00:27:17.556 --rc genhtml_function_coverage=1 00:27:17.556 --rc genhtml_legend=1 00:27:17.556 --rc geninfo_all_blocks=1 00:27:17.556 --rc geninfo_unexecuted_blocks=1 00:27:17.556 00:27:17.556 ' 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:17.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.556 --rc genhtml_branch_coverage=1 00:27:17.556 --rc genhtml_function_coverage=1 00:27:17.556 --rc genhtml_legend=1 00:27:17.556 --rc geninfo_all_blocks=1 00:27:17.556 --rc geninfo_unexecuted_blocks=1 00:27:17.556 00:27:17.556 ' 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:17.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.556 --rc genhtml_branch_coverage=1 00:27:17.556 --rc genhtml_function_coverage=1 00:27:17.556 --rc genhtml_legend=1 00:27:17.556 --rc geninfo_all_blocks=1 00:27:17.556 --rc geninfo_unexecuted_blocks=1 00:27:17.556 00:27:17.556 ' 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:17.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:17.556 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:17.557 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme 00:27:17.557 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:17.557 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:17.557 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:17.557 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:17.557 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.557 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:17.557 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:17.557 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:17.557 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.557 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:17.557 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.557 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:17.557 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:17.557 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:27:17.557 15:08:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:24.125 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:24.125 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:24.125 Found net devices under 0000:86:00.0: cvl_0_0 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:24.125 Found net devices under 0000:86:00.1: cvl_0_1 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
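The nvmf/common.sh TCP init traced in the entries that follow isolates the target-side port (cvl_0_0) in its own network namespace so target (10.0.0.2) and initiator (10.0.0.1) can talk on a single machine. Condensed into plain commands, with the interface and namespace names as they appear in this run (a sketch of the helper's effect, not the full script):

  # move the target port into a namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP (port 4420) in on the initiator side, then sanity-ping both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1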
00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:24.125 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:24.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:24.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:27:24.126 00:27:24.126 --- 10.0.0.2 ping statistics --- 00:27:24.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.126 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:24.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:24.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:27:24.126 00:27:24.126 --- 10.0.0.1 ping statistics --- 00:27:24.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:24.126 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:24.126 ************************************ 00:27:24.126 START TEST nvmf_target_disconnect_tc1 00:27:24.126 ************************************ 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect 00:27:24.126 15:08:16 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect ]] 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:24.126 [2024-12-11 15:08:16.505651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.126 [2024-12-11 15:08:16.505700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178aac0 with addr=10.0.0.2, port=4420 00:27:24.126 [2024-12-11 15:08:16.505724] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:24.126 [2024-12-11 15:08:16.505737] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:24.126 [2024-12-11 15:08:16.505743] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:27:24.126 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:24.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect: errors occurred 00:27:24.126 Initializing NVMe Controllers 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:24.126 00:27:24.126 real 0m0.106s 00:27:24.126 user 0m0.051s 00:27:24.126 sys 0m0.054s 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:24.126 ************************************ 00:27:24.126 END TEST nvmf_target_disconnect_tc1 00:27:24.126 ************************************ 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:24.126 ************************************ 00:27:24.126 START TEST nvmf_target_disconnect_tc2 00:27:24.126 ************************************ 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3271490 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3271490 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3271490 ']' 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.126 [2024-12-11 15:08:16.644044] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:27:24.126 [2024-12-11 15:08:16.644084] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.126 [2024-12-11 15:08:16.723747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:24.126 [2024-12-11 15:08:16.765716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:24.126 [2024-12-11 15:08:16.765754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
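On the core mask: the -m 0xF0 passed to nvmf_tgt above is binary 1111 0000, i.e. bits 4 through 7 set, which is why the entries that follow show one reactor starting on each of cores 4, 5, 6 and 7.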
00:27:24.126 [2024-12-11 15:08:16.765761] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:24.126 [2024-12-11 15:08:16.765767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:24.126 [2024-12-11 15:08:16.765772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:24.126 [2024-12-11 15:08:16.767325] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:27:24.126 [2024-12-11 15:08:16.767348] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:27:24.126 [2024-12-11 15:08:16.767435] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:27:24.126 [2024-12-11 15:08:16.767437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.126 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.126 Malloc0 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.127 [2024-12-11 15:08:16.933664] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.127 15:08:16 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.127 [2024-12-11 15:08:16.965943] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3271567 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:24.127 15:08:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:26.029 15:08:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3271490 00:27:26.029 15:08:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:26.029 Read completed with error (sct=0, sc=8) 00:27:26.029 starting I/O failed 00:27:26.029 Read completed with error (sct=0, sc=8) 00:27:26.029 starting I/O failed 00:27:26.029 Read completed with error (sct=0, sc=8) 00:27:26.029 starting I/O failed 00:27:26.029 Read completed with error (sct=0, sc=8) 00:27:26.029 starting I/O failed 00:27:26.029 Read completed with error (sct=0, sc=8) 00:27:26.029 starting I/O failed 00:27:26.029 Read completed with error (sct=0, sc=8) 00:27:26.029 starting I/O failed 00:27:26.029 Read completed with 
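For orientation, the tc2 setup traced above boils down to: create a 64 MiB malloc bdev (512-byte blocks), initialize the TCP transport, create subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and add data and discovery listeners on 10.0.0.2:4420. A minimal standalone sketch of the same sequence via SPDK's rpc.py CLI follows; the rpc.py path and the plain invocation (no netns, no rpc_cmd wrapper) are assumptions for illustration, not the exact commands the harness runs.

  rpc=./scripts/rpc.py                                            # assumed location of the SPDK RPC client
  $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB malloc bdev, 512 B blocks, backing namespace
  $rpc nvmf_create_transport -t tcp -o                            # bring up the TCP transport (flags copied from the trace above)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The reconnect example is then pointed at the same traddr/trsvcid, as shown in the command that launches it just above.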
error (sct=0, sc=8) 00:27:26.029 starting I/O failed 00:27:26.029 Read completed with error (sct=0, sc=8) 00:27:26.029 starting I/O failed 00:27:26.029 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 [2024-12-11 15:08:18.998014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write 
completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 [2024-12-11 15:08:18.998223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 
00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 [2024-12-11 15:08:18.998432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Read completed with error (sct=0, sc=8) 00:27:26.030 starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.030 
starting I/O failed 00:27:26.030 Write completed with error (sct=0, sc=8) 00:27:26.031 starting I/O failed 00:27:26.031 Read completed with error (sct=0, sc=8) 00:27:26.031 starting I/O failed 00:27:26.031 Read completed with error (sct=0, sc=8) 00:27:26.031 starting I/O failed 00:27:26.031 Read completed with error (sct=0, sc=8) 00:27:26.031 starting I/O failed 00:27:26.031 Write completed with error (sct=0, sc=8) 00:27:26.031 starting I/O failed 00:27:26.031 [2024-12-11 15:08:18.998626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.031 [2024-12-11 15:08:18.998903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:18.998929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:18.999142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:18.999153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:18.999243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:18.999255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:18.999405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:18.999416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:18.999570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:18.999584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:18.999655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:18.999666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:18.999807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:18.999818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:18.999887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:18.999898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 
00:27:26.031 [2024-12-11 15:08:18.999966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:18.999977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.000053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.000065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.000217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.000250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.000396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.000427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.000560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.000595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.000724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.000766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.000940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.000973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.001145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.001187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.001380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.001414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.001665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.001699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 
00:27:26.031 [2024-12-11 15:08:19.001917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.001950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.002124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.002168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.002278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.002311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.002501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.002534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.002728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.002762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.002944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.002977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.003275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.003302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.003430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.003457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.003685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.003712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.003888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.003915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 
00:27:26.031 [2024-12-11 15:08:19.004100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.004134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.004343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.004378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.004494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.004527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.004698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.004731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.004998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.005032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.005165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.005198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.005384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.005418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.005620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.031 [2024-12-11 15:08:19.005654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.031 qpair failed and we were unable to recover it. 00:27:26.031 [2024-12-11 15:08:19.005942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.005969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.006122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.006149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 
00:27:26.032 [2024-12-11 15:08:19.006248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.006273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.006372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.006397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.006641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.006689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.006974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.007007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.007198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.007230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.007399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.007431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.007596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.007628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.007819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.007870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.008049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.008083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.008209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.008244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 
00:27:26.032 [2024-12-11 15:08:19.008442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.008476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.008615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.008649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.008848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.008878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.009061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.009093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.009353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.009386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.009497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.009543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.009814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.009849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.010052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.010086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.010280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.010316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.010515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.010549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 
00:27:26.032 [2024-12-11 15:08:19.010717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.010751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.011014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.011048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.011173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.011208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.011333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.011367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.011480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.011514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.011681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.011715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.011921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.011955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.012084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.012118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.012403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.012438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.012612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.012651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 
00:27:26.032 [2024-12-11 15:08:19.012847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.012881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.012997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.013031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.013274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.013310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.013552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.013586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.013759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.013792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.013989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.014023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.014133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.032 [2024-12-11 15:08:19.014174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.032 qpair failed and we were unable to recover it. 00:27:26.032 [2024-12-11 15:08:19.014349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.014383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.014570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.014604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.014805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.014838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 
00:27:26.033 [2024-12-11 15:08:19.015026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.015061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.015245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.015280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.015403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.015437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.015650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.015685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.015855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.015889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.016074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.016108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.016323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.016359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.016613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.016648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.016772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.016806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.017079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.017113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 
00:27:26.033 [2024-12-11 15:08:19.017280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.017316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.017490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.017524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.017698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.017732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.017982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.018017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.018214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.018249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.018426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.018459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.018571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.018610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.018821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.018855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.019050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.019084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.019359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.019394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 
00:27:26.033 [2024-12-11 15:08:19.019591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.019624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.019897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.019930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.020101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.020136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.020288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.020324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.020588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.020621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.020905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.020940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.021212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.021247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.021429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.021463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.021634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.021669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.021922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.021957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 
00:27:26.033 [2024-12-11 15:08:19.022132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.022178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.022351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.022385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.022592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.022627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.022806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.022839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.022954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.022987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.023279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.023327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.033 [2024-12-11 15:08:19.023468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.033 [2024-12-11 15:08:19.023502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.033 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.023695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.023728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.023985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.024019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.024270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.024305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 
00:27:26.034 [2024-12-11 15:08:19.024427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.024461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.024632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.024666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.024872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.024905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.025176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.025222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.025338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.025371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.025485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.025519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.025699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.025732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.026015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.026050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.026318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.026353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.026637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.026671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 
00:27:26.034 [2024-12-11 15:08:19.026947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.026981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.027207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.027243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.027457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.027491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.027769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.027802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.028099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.028133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.028338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.028372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.028551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.028584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.028763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.028798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.028966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.029001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.029186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.029221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 
00:27:26.034 [2024-12-11 15:08:19.029431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.029464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.029708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.029741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.029912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.029946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.030115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.030149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.030336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.030371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.030539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.030572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.030765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.030799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.031039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.031074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.031184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.031220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 00:27:26.034 [2024-12-11 15:08:19.031492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.034 [2024-12-11 15:08:19.031526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.034 qpair failed and we were unable to recover it. 
00:27:26.035 [2024-12-11 15:08:19.031724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.031759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.031938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.031973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.032088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.032121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.032383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.032418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.032544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.032577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.032745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.032779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.032884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.032917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.033171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.033205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.033484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.033519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.033795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.033829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 
00:27:26.035 [2024-12-11 15:08:19.034024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.034057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.034253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.034288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.034462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.034496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.034682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.034714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.034909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.034948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.035121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.035155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.035400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.035434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.035617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.035651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.035846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.035880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.036180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.036215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 
00:27:26.035 [2024-12-11 15:08:19.036390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.036424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.036603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.036638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.036917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.036951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.037212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.037248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.037479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.037513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.037699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.037733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.037904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.037938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.038203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.038237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.038367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.038402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.038598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.038631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 
00:27:26.035 [2024-12-11 15:08:19.038751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.038784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.039045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.039079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.039263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.039298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.039560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.039594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.039780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.039814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.039920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.039953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.040132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.040175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.040283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.040316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.040557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.035 [2024-12-11 15:08:19.040591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.035 qpair failed and we were unable to recover it. 00:27:26.035 [2024-12-11 15:08:19.040774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.040808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 
00:27:26.036 [2024-12-11 15:08:19.041004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.041038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.041233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.041274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.041519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.041553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.041667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.041701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.041913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.041946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.042139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.042182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.042380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.042415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.042670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.042704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.042888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.042922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.043092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.043126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 
00:27:26.036 [2024-12-11 15:08:19.043418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.043454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.043733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.043767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.043983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.044016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.044268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.044304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.044428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.044462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.044669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.044704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.044894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.044928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.045180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.045216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.045459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.045493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.045664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.045699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 
00:27:26.036 [2024-12-11 15:08:19.045812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.045847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.046075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.046109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.046300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.046335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.046471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.046506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.046770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.046803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.047068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.047102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.047398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.047434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.047542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.047576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.047689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.047728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.047835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.047866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 
00:27:26.036 [2024-12-11 15:08:19.048169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.048205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.048414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.048452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.048645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.048678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.048799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.048832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.049016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.049050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.049230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.049266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.049438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.049472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.049748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.049782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.036 [2024-12-11 15:08:19.050100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.036 [2024-12-11 15:08:19.050135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.036 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.050362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.050397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 
00:27:26.037 [2024-12-11 15:08:19.050667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.050701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.050978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.051012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.051140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.051184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.051382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.051417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.051540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.051572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.051769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.051802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.051992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.052025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.052314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.052349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.052546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.052581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.052771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.052805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 
00:27:26.037 [2024-12-11 15:08:19.052985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.053019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.053127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.053172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.053347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.053382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.053590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.053623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.053835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.053869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.054064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.054097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.054292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.054327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.054546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.054580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.054753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.054787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.055050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.055084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 
00:27:26.037 [2024-12-11 15:08:19.055282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.055318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.055569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.055604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.055722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.055758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.055927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.055963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.056085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.056120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.056254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.056288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.056418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.056451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.056649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.056685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.056880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.056914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.057090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.057126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 
00:27:26.037 [2024-12-11 15:08:19.057348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.057384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.057572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.057605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.057919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.057954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.058227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.058263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.058463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.058497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.058740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.058774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.058976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.059010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.059205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.037 [2024-12-11 15:08:19.059242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.037 qpair failed and we were unable to recover it. 00:27:26.037 [2024-12-11 15:08:19.059468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.059502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.059674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.059709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 
00:27:26.038 [2024-12-11 15:08:19.059887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.059921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.060204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.060241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.060360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.060392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.060570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.060604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.060876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.060911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.061154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.061199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.061393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.061428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.061535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.061569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.061786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.061820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.062017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.062052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 
00:27:26.038 [2024-12-11 15:08:19.062244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.062278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.062455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.062488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.062677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.062710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.062986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.063021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.063197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.063233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.063407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.063441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.063562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.063601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.063843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.063877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.064121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.064155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.064446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.064481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 
00:27:26.038 [2024-12-11 15:08:19.064701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.064736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.064850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.064883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.065073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.065109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.065265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.065312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.065542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.065591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.065875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.065915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.066182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.066218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.066464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.066499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.066790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.066823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.067011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.067045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 
00:27:26.038 [2024-12-11 15:08:19.067255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.067292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.067563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.067597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.067731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.067767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.038 qpair failed and we were unable to recover it. 00:27:26.038 [2024-12-11 15:08:19.067894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.038 [2024-12-11 15:08:19.067943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.039 qpair failed and we were unable to recover it. 00:27:26.039 [2024-12-11 15:08:19.068197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.039 [2024-12-11 15:08:19.068250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.039 qpair failed and we were unable to recover it. 00:27:26.039 [2024-12-11 15:08:19.068395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.039 [2024-12-11 15:08:19.068429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.039 qpair failed and we were unable to recover it. 00:27:26.039 [2024-12-11 15:08:19.068561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.039 [2024-12-11 15:08:19.068594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.039 qpair failed and we were unable to recover it. 00:27:26.039 [2024-12-11 15:08:19.068704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.039 [2024-12-11 15:08:19.068735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.039 qpair failed and we were unable to recover it. 00:27:26.039 [2024-12-11 15:08:19.068851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.039 [2024-12-11 15:08:19.068884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.039 qpair failed and we were unable to recover it. 00:27:26.039 [2024-12-11 15:08:19.069082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.039 [2024-12-11 15:08:19.069115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.039 qpair failed and we were unable to recover it. 
00:27:26.039 [2024-12-11 15:08:19.069408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.039 [2024-12-11 15:08:19.069442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.039 qpair failed and we were unable to recover it. 00:27:26.039 [2024-12-11 15:08:19.069630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.039 [2024-12-11 15:08:19.069662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.039 qpair failed and we were unable to recover it. 00:27:26.039 [2024-12-11 15:08:19.069792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.039 [2024-12-11 15:08:19.069837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.039 qpair failed and we were unable to recover it. 00:27:26.039 [2024-12-11 15:08:19.070063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.039 [2024-12-11 15:08:19.070123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.039 qpair failed and we were unable to recover it. 00:27:26.039 [2024-12-11 15:08:19.070299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.039 [2024-12-11 15:08:19.070349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.039 qpair failed and we were unable to recover it. 00:27:26.039 [2024-12-11 15:08:19.070488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.070523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.070701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.070736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.070983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.071016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.071148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.071312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.071543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.071578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 
00:27:26.317 [2024-12-11 15:08:19.071872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.071908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.072179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.072215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.072402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.072436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.072674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.072709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.072938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.072972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.073106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.073141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.073351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.073387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.073523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.073557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.073760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.073794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.073965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.073998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 
00:27:26.317 [2024-12-11 15:08:19.074199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.074235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.074419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.074453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.074575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.074611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.074975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.075009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.075135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.075190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.075472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.075506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.075615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.075650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.075779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.075812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.075997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.076031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.076208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.076245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 
00:27:26.317 [2024-12-11 15:08:19.076421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.076463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.076654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.076688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.076864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.076898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.077077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.077110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.077342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.077376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.077550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.077584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.077778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.317 [2024-12-11 15:08:19.077814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.317 qpair failed and we were unable to recover it. 00:27:26.317 [2024-12-11 15:08:19.078001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.078035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.078142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.078188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.078436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.078471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 
00:27:26.318 [2024-12-11 15:08:19.078650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.078685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.078812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.078847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.078958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.078993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.079254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.079291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.079426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.079461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.079658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.079692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.079881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.079915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.080115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.080149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.080338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.080372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.080542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.080577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 
00:27:26.318 [2024-12-11 15:08:19.080918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.080954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.081228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.081265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.081508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.081543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.081749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.081784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.081958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.081993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.082118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.082153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.082338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.082373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.082569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.082603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.082874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.082908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.083093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.083126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 
00:27:26.318 [2024-12-11 15:08:19.083354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.083391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.083566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.083599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.083889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.083924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.084123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.084172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.084448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.084482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.084679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.084713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.084925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.084960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.085176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.085213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.085414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.085449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 00:27:26.318 [2024-12-11 15:08:19.085629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.318 [2024-12-11 15:08:19.085665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.318 qpair failed and we were unable to recover it. 
00:27:26.318 [2024-12-11 15:08:19.085924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.085958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.086141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.086198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.086457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.086499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.086700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.086736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.086916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.086953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.087136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.087205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.087510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.087552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.087679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.087713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.087891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.087925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.088185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.088222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 
00:27:26.319 [2024-12-11 15:08:19.088419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.088454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.088654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.088688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.088873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.088907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.089095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.089130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.089347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.089382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.089596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.089630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.089889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.089923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.090193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.090230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.090556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.090590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.090798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.090834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 
00:27:26.319 [2024-12-11 15:08:19.091009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.091045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.091205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.091241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.091433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.091469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.091606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.091639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.091908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.091942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.092208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.092245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.092360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.092395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.092593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.092628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.092889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.092931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.093212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.093249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 
00:27:26.319 [2024-12-11 15:08:19.093427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.093462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.093641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.093677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.093952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.093988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.094112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.094146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.094331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.319 [2024-12-11 15:08:19.094365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.319 qpair failed and we were unable to recover it. 00:27:26.319 [2024-12-11 15:08:19.094493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.094546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.094724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.094759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.094936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.094971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.095181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.095220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.095420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.095454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 
00:27:26.320 [2024-12-11 15:08:19.095585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.095621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.095817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.095853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.096053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.096089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.096272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.096308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.096555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.096591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.096730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.096765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.096945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.096979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.097101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.097135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.097264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.097300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.097487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.097521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 
00:27:26.320 [2024-12-11 15:08:19.097636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.097670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.097806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.097842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.097964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.097998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.098118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.098153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.098292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.098326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.098503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.098543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.098734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.098768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.098880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.098915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.099044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.099077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.099296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.099332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 
00:27:26.320 [2024-12-11 15:08:19.099445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.099479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.099675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.099710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.099882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.099919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.100126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.100176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.100372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.100405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.100514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.100548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.100681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.100716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.100919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.100955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.101135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.101183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.101371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.101407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 
00:27:26.320 [2024-12-11 15:08:19.101513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.320 [2024-12-11 15:08:19.101546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.320 qpair failed and we were unable to recover it. 00:27:26.320 [2024-12-11 15:08:19.101756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.101790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.101981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.102015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.102132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.102179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.102381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.102415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.102596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.102629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.102808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.102842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.103021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.103056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.103182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.103218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.103360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.103394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 
00:27:26.321 [2024-12-11 15:08:19.103526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.103561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.103676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.103710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.103914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.103948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.104075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.104110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.104246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.104281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.104411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.104445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.104601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.104637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.104819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.104854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.105102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.105137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.105326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.105362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 
00:27:26.321 [2024-12-11 15:08:19.105473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.105518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.105644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.105679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.105802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.105837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.106011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.106046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.106228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.106265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.106380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.106415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.106538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.106573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.106691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.106724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.106834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.106868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.107127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.107190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 
00:27:26.321 [2024-12-11 15:08:19.107369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.107405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.107530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.107564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.107681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.107715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.107828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.107864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.108129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.108178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.108429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.108463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.321 qpair failed and we were unable to recover it. 00:27:26.321 [2024-12-11 15:08:19.108580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.321 [2024-12-11 15:08:19.108614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.108725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.108758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.108871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.108905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.109182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.109219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 
00:27:26.322 [2024-12-11 15:08:19.109405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.109439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.109666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.109700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.109959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.109993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.110218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.110254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.110368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.110401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.110528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.110561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.110733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.110767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.110892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.110927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.111042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.111074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.111253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.111288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 
00:27:26.322 [2024-12-11 15:08:19.111465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.111499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.111793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.111828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.112018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.112052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.112326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.112367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.112546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.112580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.112699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.112734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.112923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.112956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.113129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.113175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.113302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.113334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.113458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.113493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 
00:27:26.322 [2024-12-11 15:08:19.113710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.113745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.113909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.113944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.114066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.114100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.114221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.114257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.114376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.114409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.322 [2024-12-11 15:08:19.114581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.322 [2024-12-11 15:08:19.114615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.322 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.114731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.114767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.114983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.115017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.115211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.115245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.115357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.115392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 
00:27:26.323 [2024-12-11 15:08:19.115568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.115603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.115779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.115812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.115926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.115960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.116156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.116201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.116345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.116381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.116582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.116615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.116738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.116771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.116876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.116910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.117085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.117118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.117397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.117434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 
00:27:26.323 [2024-12-11 15:08:19.117552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.117590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.117765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.117799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.117911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.117948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.118121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.118154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.118353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.118387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.118577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.118609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.118790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.118825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.119016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.119050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.119232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.119267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.119393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.119426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 
00:27:26.323 [2024-12-11 15:08:19.119545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.119578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.119777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.119812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.119985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.120020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.120196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.120230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.120410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.120443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.120559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.120590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.120698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.120730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.120908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.120943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.121136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.121178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.121355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.121388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 
00:27:26.323 [2024-12-11 15:08:19.121506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.323 [2024-12-11 15:08:19.121547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.323 qpair failed and we were unable to recover it. 00:27:26.323 [2024-12-11 15:08:19.121718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.121751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.122018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.122053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.122300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.122334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.122458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.122492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.122597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.122629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.122851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.122884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.123008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.123048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.123196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.123233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.123428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.123467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 
00:27:26.324 [2024-12-11 15:08:19.123593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.123626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.123829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.123864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.124044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.124079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.124397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.124434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.124556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.124590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.124775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.124809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.124933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.124966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.125138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.125182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.125377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.125411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.125587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.125621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 
00:27:26.324 [2024-12-11 15:08:19.125868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.125902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.126197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.126233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.126507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.126541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.126725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.126759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.126934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.126969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.127141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.127184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.127363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.127396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.127572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.127605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.127731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.127764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.127956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.127991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 
00:27:26.324 [2024-12-11 15:08:19.128199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.128234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.128409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.128443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.128567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.128600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.128716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.128750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.128856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.128888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.129146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.129189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.324 qpair failed and we were unable to recover it. 00:27:26.324 [2024-12-11 15:08:19.129423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.324 [2024-12-11 15:08:19.129457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.129582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.129615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.129825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.129861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.129995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.130029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 
00:27:26.325 [2024-12-11 15:08:19.130260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.130295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.130475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.130515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.130690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.130724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.130847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.130881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.131151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.131210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.131388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.131421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.131596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.131630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.131739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.131770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.132040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.132081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.132273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.132307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 
00:27:26.325 [2024-12-11 15:08:19.132426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.132460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.132639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.132672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.132883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.132917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.133089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.133122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.133239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.133272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.133404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.133439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.133708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.133742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.133862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.133896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.134110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.134144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 00:27:26.325 [2024-12-11 15:08:19.134284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.325 [2024-12-11 15:08:19.134317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.325 qpair failed and we were unable to recover it. 
00:27:26.325 [2024-12-11 15:08:19.134488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.325 [2024-12-11 15:08:19.134521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420
00:27:26.325 qpair failed and we were unable to recover it.
00:27:26.325 [the same two-record pattern — posix_sock_create: connect() failed, errno = 111, then nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously from 15:08:19.134488 through 15:08:19.182346]
00:27:26.331 [2024-12-11 15:08:19.182310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.331 [2024-12-11 15:08:19.182346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420
00:27:26.331 qpair failed and we were unable to recover it.
00:27:26.331 [2024-12-11 15:08:19.182624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.331 [2024-12-11 15:08:19.182659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.331 qpair failed and we were unable to recover it. 00:27:26.331 [2024-12-11 15:08:19.182859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.331 [2024-12-11 15:08:19.182894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.331 qpair failed and we were unable to recover it. 00:27:26.331 [2024-12-11 15:08:19.183181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.331 [2024-12-11 15:08:19.183219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.331 qpair failed and we were unable to recover it. 00:27:26.331 [2024-12-11 15:08:19.183331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.331 [2024-12-11 15:08:19.183364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.331 qpair failed and we were unable to recover it. 00:27:26.331 [2024-12-11 15:08:19.183551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.331 [2024-12-11 15:08:19.183587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.331 qpair failed and we were unable to recover it. 00:27:26.331 [2024-12-11 15:08:19.183789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.331 [2024-12-11 15:08:19.183823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.331 qpair failed and we were unable to recover it. 00:27:26.331 [2024-12-11 15:08:19.184120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.331 [2024-12-11 15:08:19.184172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.331 qpair failed and we were unable to recover it. 00:27:26.331 [2024-12-11 15:08:19.184306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.331 [2024-12-11 15:08:19.184340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.331 qpair failed and we were unable to recover it. 00:27:26.331 [2024-12-11 15:08:19.184536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.331 [2024-12-11 15:08:19.184570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.184757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.184795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 
00:27:26.332 [2024-12-11 15:08:19.185075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.185110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.185268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.185306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.185512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.185547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.185687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.185722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.185838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.185874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.185985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.186022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.186310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.186348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.186596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.186631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.186867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.186903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.187027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.187063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 
00:27:26.332 [2024-12-11 15:08:19.187184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.187220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.187439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.187474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.187587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.187628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.187923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.187958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.188140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.188188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.188372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.188407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.188516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.188549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.188812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.188847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.189047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.189083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.189264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.189299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 
00:27:26.332 [2024-12-11 15:08:19.189565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.189601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.189715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.189750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.190001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.190037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.190223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.190261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.190469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.190504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.190636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.190671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.190802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.190837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.191117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.191152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.191377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.191413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.191539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.191572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 
00:27:26.332 [2024-12-11 15:08:19.191683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.191714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.191901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.191934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.192200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.192237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.192354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.192388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.192576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.332 [2024-12-11 15:08:19.192609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.332 qpair failed and we were unable to recover it. 00:27:26.332 [2024-12-11 15:08:19.192745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.192781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.192987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.193022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.193147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.193192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.193463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.193497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.193678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.193718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 
00:27:26.333 [2024-12-11 15:08:19.193900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.193936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.194060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.194093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.194312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.194350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.194485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.194521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.194805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.194840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.195044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.195078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.195204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.195243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.195466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.195499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.195622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.195656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.195773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.195807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 
00:27:26.333 [2024-12-11 15:08:19.196058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.196094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.196291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.196327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.196535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.196571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.196705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.196740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.196923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.196956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.197065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.197100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.197324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.197360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.197487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.197522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.197641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.197675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.197985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.198020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 
00:27:26.333 [2024-12-11 15:08:19.198209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.198245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.198374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.198410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.198606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.198640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.198758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.198791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.199009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.199044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.199263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.199299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.199485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.199520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.199653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.199688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.199896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.199939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.200146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.200193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 
00:27:26.333 [2024-12-11 15:08:19.200340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.200374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.200516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.333 [2024-12-11 15:08:19.200550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.333 qpair failed and we were unable to recover it. 00:27:26.333 [2024-12-11 15:08:19.200676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.334 [2024-12-11 15:08:19.200712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.334 qpair failed and we were unable to recover it. 00:27:26.334 [2024-12-11 15:08:19.200924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.334 [2024-12-11 15:08:19.200959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.334 qpair failed and we were unable to recover it. 00:27:26.334 [2024-12-11 15:08:19.201141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.334 [2024-12-11 15:08:19.201187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.334 qpair failed and we were unable to recover it. 00:27:26.334 [2024-12-11 15:08:19.201466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.334 [2024-12-11 15:08:19.201502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.334 qpair failed and we were unable to recover it. 00:27:26.334 [2024-12-11 15:08:19.201622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.334 [2024-12-11 15:08:19.201658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.334 qpair failed and we were unable to recover it. 00:27:26.334 [2024-12-11 15:08:19.201868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.334 [2024-12-11 15:08:19.201903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.334 qpair failed and we were unable to recover it. 00:27:26.334 [2024-12-11 15:08:19.202089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.334 [2024-12-11 15:08:19.202125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.334 qpair failed and we were unable to recover it. 00:27:26.334 [2024-12-11 15:08:19.202433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.334 [2024-12-11 15:08:19.202468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.334 qpair failed and we were unable to recover it. 
00:27:26.334 [2024-12-11 15:08:19.202758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.334 [2024-12-11 15:08:19.202851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420
00:27:26.334 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error pair for tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 repeats for every retry up to 15:08:19.211721; identical entries elided ...]
00:27:26.335 [2024-12-11 15:08:19.212003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.335 [2024-12-11 15:08:19.212039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420
00:27:26.335 qpair failed and we were unable to recover it.
00:27:26.335 [2024-12-11 15:08:19.212208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.335 [2024-12-11 15:08:19.212249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420
00:27:26.335 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error pair for tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 repeats for every retry up to 15:08:19.220676; identical entries elided ...]
00:27:26.336 [2024-12-11 15:08:19.220989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.336 [2024-12-11 15:08:19.221025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420
00:27:26.336 qpair failed and we were unable to recover it.
00:27:26.336 [2024-12-11 15:08:19.221231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.336 [2024-12-11 15:08:19.221268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.336 qpair failed and we were unable to recover it. 00:27:26.336 [2024-12-11 15:08:19.221461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.336 [2024-12-11 15:08:19.221497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.336 qpair failed and we were unable to recover it. 00:27:26.336 [2024-12-11 15:08:19.221703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.336 [2024-12-11 15:08:19.221738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.336 qpair failed and we were unable to recover it. 00:27:26.336 [2024-12-11 15:08:19.221924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.336 [2024-12-11 15:08:19.221959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.336 qpair failed and we were unable to recover it. 00:27:26.336 [2024-12-11 15:08:19.222076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.336 [2024-12-11 15:08:19.222112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.336 qpair failed and we were unable to recover it. 00:27:26.336 [2024-12-11 15:08:19.222408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.336 [2024-12-11 15:08:19.222444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.336 qpair failed and we were unable to recover it. 00:27:26.336 [2024-12-11 15:08:19.222648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.336 [2024-12-11 15:08:19.222683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.336 qpair failed and we were unable to recover it. 00:27:26.336 [2024-12-11 15:08:19.222881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.336 [2024-12-11 15:08:19.222916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.336 qpair failed and we were unable to recover it. 00:27:26.336 [2024-12-11 15:08:19.223102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.336 [2024-12-11 15:08:19.223138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.336 qpair failed and we were unable to recover it. 00:27:26.336 [2024-12-11 15:08:19.223280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.336 [2024-12-11 15:08:19.223317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.336 qpair failed and we were unable to recover it. 
00:27:26.336 [2024-12-11 15:08:19.223453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.336 [2024-12-11 15:08:19.223494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.336 qpair failed and we were unable to recover it. 00:27:26.336 [2024-12-11 15:08:19.223717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.336 [2024-12-11 15:08:19.223752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.336 qpair failed and we were unable to recover it. 00:27:26.336 [2024-12-11 15:08:19.223956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.336 [2024-12-11 15:08:19.223989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.336 qpair failed and we were unable to recover it. 00:27:26.336 [2024-12-11 15:08:19.224183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.336 [2024-12-11 15:08:19.224218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.336 qpair failed and we were unable to recover it. 00:27:26.336 [2024-12-11 15:08:19.224476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.336 [2024-12-11 15:08:19.224517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.336 qpair failed and we were unable to recover it. 00:27:26.336 [2024-12-11 15:08:19.224653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.224688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.224814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.224850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.224962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.224994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.225250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.225285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.225492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.225527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 
00:27:26.337 [2024-12-11 15:08:19.225653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.225686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.225809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.225844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.225971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.226006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.226286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.226323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.226451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.226486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.226739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.226774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.227075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.227111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.227243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.227278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.227573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.227610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.227925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.227961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 
00:27:26.337 [2024-12-11 15:08:19.228294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.228331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.228609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.228644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.228847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.228883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.229180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.229217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.229478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.229513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.229813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.229848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.230107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.230144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.230448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.230484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.230689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.230725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.230917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.230953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 
00:27:26.337 [2024-12-11 15:08:19.231139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.231184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.231463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.231498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.231638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.231673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.231951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.231985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.232191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.232228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.232365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.232400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.232696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.232732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.232922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.232957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.233181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.233218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.233424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.233460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 
00:27:26.337 [2024-12-11 15:08:19.233750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.233785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.337 qpair failed and we were unable to recover it. 00:27:26.337 [2024-12-11 15:08:19.234003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.337 [2024-12-11 15:08:19.234039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.234178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.234214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.234378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.234412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.234642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.234676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.234891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.234925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.235127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.235172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.235305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.235340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.235467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.235504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.235622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.235654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 
00:27:26.338 [2024-12-11 15:08:19.235834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.235870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.236052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.236087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.236285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.236321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.236599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.236633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.236918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.236953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.237173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.237210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.237407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.237442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.237576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.237612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.237802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.237837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.238037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.238072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 
00:27:26.338 [2024-12-11 15:08:19.238279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.238314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.238462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.238495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.238725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.238760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.238904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.238939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.239172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.239209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.239333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.239369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.239546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.239580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.239722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.239755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.240038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.240073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.240266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.240302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 
00:27:26.338 [2024-12-11 15:08:19.240487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.240522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.240652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.240688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.240864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.240903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.241095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.241129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.241345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.241380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.241511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.241546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.241727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.241762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.241968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.242003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.338 qpair failed and we were unable to recover it. 00:27:26.338 [2024-12-11 15:08:19.242283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.338 [2024-12-11 15:08:19.242321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.242601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.242637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 
00:27:26.339 [2024-12-11 15:08:19.242955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.242990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.243263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.243300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.243506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.243541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.243739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.243776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.244064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.244099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.244305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.244342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.244556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.244592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.244896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.244934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.245141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.245191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.245310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.245345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 
00:27:26.339 [2024-12-11 15:08:19.245467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.245502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.245723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.245757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.245977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.246011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.246281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.246317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.246443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.246476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.246623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.246658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.246908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.246944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.247131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.247179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.247372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.247407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.247554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.247595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 
00:27:26.339 [2024-12-11 15:08:19.247735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.247769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.247948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.247982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.248180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.248216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.248353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.248388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.248532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.248566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.248682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.248716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.248835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.248870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.249066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.249101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.249303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.249340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.249563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.249598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 
00:27:26.339 [2024-12-11 15:08:19.249799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.249834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.249965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.250000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.250282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.250321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.250467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.250503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.339 qpair failed and we were unable to recover it. 00:27:26.339 [2024-12-11 15:08:19.250709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.339 [2024-12-11 15:08:19.250745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.340 qpair failed and we were unable to recover it. 00:27:26.340 [2024-12-11 15:08:19.250950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.340 [2024-12-11 15:08:19.250986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.340 qpair failed and we were unable to recover it. 00:27:26.340 [2024-12-11 15:08:19.251195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.340 [2024-12-11 15:08:19.251230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.340 qpair failed and we were unable to recover it. 00:27:26.340 [2024-12-11 15:08:19.251459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.340 [2024-12-11 15:08:19.251494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.340 qpair failed and we were unable to recover it. 00:27:26.340 [2024-12-11 15:08:19.251611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.340 [2024-12-11 15:08:19.251646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.340 qpair failed and we were unable to recover it. 00:27:26.340 [2024-12-11 15:08:19.251917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.340 [2024-12-11 15:08:19.251953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.340 qpair failed and we were unable to recover it. 
00:27:26.340 [2024-12-11 15:08:19.252077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.340 [2024-12-11 15:08:19.252113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420
00:27:26.340 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 15:08:19.252 to 15:08:19.298; duplicate entries omitted ...]
00:27:26.346 [2024-12-11 15:08:19.298584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.346 [2024-12-11 15:08:19.298617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420
00:27:26.346 qpair failed and we were unable to recover it.
00:27:26.346 [2024-12-11 15:08:19.298832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.298869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.298980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.299015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.299213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.299251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.299469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.299504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.299701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.299736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.299931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.299965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.300181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.300216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.300496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.300533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.300680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.300715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.300899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.300935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 
00:27:26.346 [2024-12-11 15:08:19.301198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.301236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.301522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.301556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.301763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.301798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.302065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.302105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.302236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.302271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.302396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.302429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.302608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.302642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.302765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.302798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.302930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.302964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.303146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.303217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 
00:27:26.346 [2024-12-11 15:08:19.303358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.303392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.346 [2024-12-11 15:08:19.303533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.346 [2024-12-11 15:08:19.303569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.346 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.303705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.303739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.303869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.303903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.304186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.304223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.304410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.304445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.304577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.304612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.304730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.304764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.304886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.304919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.305209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.305245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 
00:27:26.347 [2024-12-11 15:08:19.305413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.305448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.305624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.305659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.305896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.305932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.306114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.306147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.306405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.306440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.306719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.306755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.306957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.306992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.307207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.307243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.307464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.307499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.307687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.307721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 
00:27:26.347 [2024-12-11 15:08:19.307921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.307956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.308180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.308217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.308349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.308382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.308589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.308624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.308828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.308863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.309072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.309106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.309433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.309469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.309753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.309787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.310064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.310098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.310304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.310341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 
00:27:26.347 [2024-12-11 15:08:19.310551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.310586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.310770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.310806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.311016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.311051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.311182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.311218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.311352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.311392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.347 [2024-12-11 15:08:19.311572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.347 [2024-12-11 15:08:19.311606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.347 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.311804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.311838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.312114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.312149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.312457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.312492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.312737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.312772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 
00:27:26.348 [2024-12-11 15:08:19.312967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.313001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.313227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.313263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.313386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.313420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.313551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.313585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.313804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.313839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.314055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.314088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.314232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.314267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.314477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.314512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.314654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.314691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.314884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.314919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 
00:27:26.348 [2024-12-11 15:08:19.315107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.315141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.315415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.315450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.315673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.315706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.315918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.315953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.316133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.316183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.316317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.316352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.316528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.316563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.316697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.316732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.316878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.316913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.317106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.317140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 
00:27:26.348 [2024-12-11 15:08:19.317362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.317396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.317529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.317570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.317696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.317730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.317873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.317907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.318027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.318061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.318261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.318297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.318485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.318520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.318719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.318753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.318886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.318921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 00:27:26.348 [2024-12-11 15:08:19.319103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.348 [2024-12-11 15:08:19.319138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.348 qpair failed and we were unable to recover it. 
00:27:26.349 [2024-12-11 15:08:19.319277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.319312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.319590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.319624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.319749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.319783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.320041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.320075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.320215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.320250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.320411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.320446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.320573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.320607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.320816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.320852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.321040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.321076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.321292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.321328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 
00:27:26.349 [2024-12-11 15:08:19.321474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.321510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.321733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.321768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.321948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.321983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.322114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.322148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.322361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.322396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.322528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.322563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.322742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.322777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.322986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.323021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.323199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.323247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.323511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.323545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 
00:27:26.349 [2024-12-11 15:08:19.323689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.323726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.323919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.323952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.324083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.324115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.324291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.324328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.324515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.324549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.324776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.324810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.325021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.325057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.325190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.325227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.325368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.325403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.325682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.325718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 
00:27:26.349 [2024-12-11 15:08:19.325983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.326019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.326214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.326250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.326481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.326516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.326653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.326690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.326894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.326929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.349 [2024-12-11 15:08:19.327041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.349 [2024-12-11 15:08:19.327077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.349 qpair failed and we were unable to recover it. 00:27:26.350 [2024-12-11 15:08:19.327290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.350 [2024-12-11 15:08:19.327327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.350 qpair failed and we were unable to recover it. 00:27:26.350 [2024-12-11 15:08:19.327532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.350 [2024-12-11 15:08:19.327566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.350 qpair failed and we were unable to recover it. 00:27:26.350 [2024-12-11 15:08:19.327856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.350 [2024-12-11 15:08:19.327892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.350 qpair failed and we were unable to recover it. 00:27:26.350 [2024-12-11 15:08:19.328077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.350 [2024-12-11 15:08:19.328113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.350 qpair failed and we were unable to recover it. 
00:27:26.350 [2024-12-11 15:08:19.328370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.350 [2024-12-11 15:08:19.328407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.350 qpair failed and we were unable to recover it. 00:27:26.350 [2024-12-11 15:08:19.328548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.350 [2024-12-11 15:08:19.328583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.350 qpair failed and we were unable to recover it. 00:27:26.350 [2024-12-11 15:08:19.328785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.350 [2024-12-11 15:08:19.328820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.350 qpair failed and we were unable to recover it. 00:27:26.350 [2024-12-11 15:08:19.329118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.350 [2024-12-11 15:08:19.329153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.350 qpair failed and we were unable to recover it. 00:27:26.350 [2024-12-11 15:08:19.329322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.350 [2024-12-11 15:08:19.329356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.350 qpair failed and we were unable to recover it. 00:27:26.350 [2024-12-11 15:08:19.329480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.350 [2024-12-11 15:08:19.329514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.350 qpair failed and we were unable to recover it. 00:27:26.350 [2024-12-11 15:08:19.329730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.350 [2024-12-11 15:08:19.329764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.350 qpair failed and we were unable to recover it. 00:27:26.350 [2024-12-11 15:08:19.330025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.350 [2024-12-11 15:08:19.330058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.350 qpair failed and we were unable to recover it. 00:27:26.350 [2024-12-11 15:08:19.330257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.350 [2024-12-11 15:08:19.330294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.350 qpair failed and we were unable to recover it. 00:27:26.350 [2024-12-11 15:08:19.330425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.350 [2024-12-11 15:08:19.330460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.350 qpair failed and we were unable to recover it. 
00:27:26.352 [2024-12-11 15:08:19.344426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.352 [2024-12-11 15:08:19.344518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.352 qpair failed and we were unable to recover it. 00:27:26.632 [2024-12-11 15:08:19.344738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.632 [2024-12-11 15:08:19.344799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.632 qpair failed and we were unable to recover it. 00:27:26.632 [2024-12-11 15:08:19.345181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.632 [2024-12-11 15:08:19.345249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.632 qpair failed and we were unable to recover it. 00:27:26.632 [2024-12-11 15:08:19.345518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.632 [2024-12-11 15:08:19.345620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.632 qpair failed and we were unable to recover it. 00:27:26.632 [2024-12-11 15:08:19.345819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.632 [2024-12-11 15:08:19.345884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.632 qpair failed and we were unable to recover it. 00:27:26.632 [2024-12-11 15:08:19.346052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.632 [2024-12-11 15:08:19.346091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.632 qpair failed and we were unable to recover it. 00:27:26.632 [2024-12-11 15:08:19.346341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.632 [2024-12-11 15:08:19.346385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.632 qpair failed and we were unable to recover it. 00:27:26.632 [2024-12-11 15:08:19.346522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.632 [2024-12-11 15:08:19.346566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.632 qpair failed and we were unable to recover it. 00:27:26.632 [2024-12-11 15:08:19.346801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.632 [2024-12-11 15:08:19.346853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.632 qpair failed and we were unable to recover it. 00:27:26.632 [2024-12-11 15:08:19.346994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.632 [2024-12-11 15:08:19.347030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.632 qpair failed and we were unable to recover it. 
00:27:26.633 [2024-12-11 15:08:19.358277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.633 [2024-12-11 15:08:19.358329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.633 qpair failed and we were unable to recover it. 00:27:26.633 [2024-12-11 15:08:19.358502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.633 [2024-12-11 15:08:19.358538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.633 qpair failed and we were unable to recover it. 00:27:26.633 [2024-12-11 15:08:19.358772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.633 [2024-12-11 15:08:19.358807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.633 qpair failed and we were unable to recover it. 00:27:26.633 [2024-12-11 15:08:19.358948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.633 [2024-12-11 15:08:19.358982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.633 qpair failed and we were unable to recover it. 00:27:26.633 [2024-12-11 15:08:19.359188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.633 [2024-12-11 15:08:19.359223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.633 qpair failed and we were unable to recover it. 00:27:26.633 [2024-12-11 15:08:19.359477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.634 [2024-12-11 15:08:19.359510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.634 qpair failed and we were unable to recover it. 00:27:26.634 [2024-12-11 15:08:19.359663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.634 [2024-12-11 15:08:19.359697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.634 qpair failed and we were unable to recover it. 00:27:26.634 [2024-12-11 15:08:19.359918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.634 [2024-12-11 15:08:19.359953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.634 qpair failed and we were unable to recover it. 00:27:26.634 [2024-12-11 15:08:19.360148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.634 [2024-12-11 15:08:19.360195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.634 qpair failed and we were unable to recover it. 00:27:26.634 [2024-12-11 15:08:19.360403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.634 [2024-12-11 15:08:19.360442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.634 qpair failed and we were unable to recover it. 
00:27:26.635 [2024-12-11 15:08:19.375198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.635 [2024-12-11 15:08:19.375234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.635 qpair failed and we were unable to recover it. 00:27:26.635 [2024-12-11 15:08:19.375441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.635 [2024-12-11 15:08:19.375476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.635 qpair failed and we were unable to recover it. 00:27:26.635 [2024-12-11 15:08:19.375728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.635 [2024-12-11 15:08:19.375763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.635 qpair failed and we were unable to recover it. 00:27:26.635 [2024-12-11 15:08:19.375948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.635 [2024-12-11 15:08:19.375982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.635 qpair failed and we were unable to recover it. 00:27:26.635 [2024-12-11 15:08:19.376179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.635 [2024-12-11 15:08:19.376215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.635 qpair failed and we were unable to recover it. 00:27:26.635 [2024-12-11 15:08:19.376346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.635 [2024-12-11 15:08:19.376380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.635 qpair failed and we were unable to recover it. 00:27:26.635 [2024-12-11 15:08:19.376506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.635 [2024-12-11 15:08:19.376542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.635 qpair failed and we were unable to recover it. 00:27:26.635 [2024-12-11 15:08:19.376821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.635 [2024-12-11 15:08:19.376855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.635 qpair failed and we were unable to recover it. 00:27:26.635 [2024-12-11 15:08:19.377134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.635 [2024-12-11 15:08:19.377183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.635 qpair failed and we were unable to recover it. 00:27:26.635 [2024-12-11 15:08:19.377384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.635 [2024-12-11 15:08:19.377420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.635 qpair failed and we were unable to recover it. 
00:27:26.635 [2024-12-11 15:08:19.377674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.635 [2024-12-11 15:08:19.377708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.635 qpair failed and we were unable to recover it. 00:27:26.635 [2024-12-11 15:08:19.377916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.635 [2024-12-11 15:08:19.377951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.635 qpair failed and we were unable to recover it. 00:27:26.635 [2024-12-11 15:08:19.378077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.635 [2024-12-11 15:08:19.378113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.635 qpair failed and we were unable to recover it. 00:27:26.635 [2024-12-11 15:08:19.378330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.635 [2024-12-11 15:08:19.378366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.635 qpair failed and we were unable to recover it. 00:27:26.635 [2024-12-11 15:08:19.378481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.378516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.378735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.378770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.378951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.378986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.379294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.379335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.379491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.379526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.379787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.379822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 
00:27:26.636 [2024-12-11 15:08:19.380022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.380057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.380243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.380280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.380413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.380448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.380668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.380703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.380890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.380926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.381212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.381246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.381436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.381471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.381727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.381762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.381875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.381907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.382030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.382065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 
00:27:26.636 [2024-12-11 15:08:19.382258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.382294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.382504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.382540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.382752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.382786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.382994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.383028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.383233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.383270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.383479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.383560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.383828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.383867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.384054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.384089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.384222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.384255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.384396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.384431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 
00:27:26.636 [2024-12-11 15:08:19.384612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.384647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.384926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.384960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.385149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.385194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.385361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.385396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.385651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.385684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.385915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.385951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.386167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.386202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.636 qpair failed and we were unable to recover it. 00:27:26.636 [2024-12-11 15:08:19.386383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.636 [2024-12-11 15:08:19.386418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.386544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.386589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.386723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.386759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 
00:27:26.637 [2024-12-11 15:08:19.386966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.386999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.387255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.387291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.387439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.387473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.387606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.387641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.387844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.387879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.388177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.388214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.388424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.388458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.388661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.388695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.388897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.388931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.389055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.389090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 
00:27:26.637 [2024-12-11 15:08:19.389298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.389334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.389568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.389602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.389739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.389775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.390052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.390086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.390226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.390262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.390377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.390411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.390603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.390638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.390860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.390895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.391005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.391040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.391199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.391235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 
00:27:26.637 [2024-12-11 15:08:19.391373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.391408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.391593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.391628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.391835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.391870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.392066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.392101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.392315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.392349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.392704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.392787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.393007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.393048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.393195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.393233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.393438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.393474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.393626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.393661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 
00:27:26.637 [2024-12-11 15:08:19.393880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.393915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.394128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.394177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.394302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.394338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.394552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.394586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.394795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.394830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.394955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.394990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.637 [2024-12-11 15:08:19.395212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.637 [2024-12-11 15:08:19.395249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.637 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.395365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.395401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.395534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.395567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.395842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.395878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 
00:27:26.638 [2024-12-11 15:08:19.396010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.396044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.396283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.396320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.396457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.396492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.396617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.396652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.396946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.396981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.397212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.397248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.397381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.397416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.397546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.397581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.397821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.397856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.398037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.398073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 
00:27:26.638 [2024-12-11 15:08:19.398218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.398253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.398391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.398425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.398554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.398595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.398823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.398858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.399048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.399082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.399269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.399321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.399523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.399560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.399762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.399796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.400060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.400101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.400333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.400387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 
00:27:26.638 [2024-12-11 15:08:19.400565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.400607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.400896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.400931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.401116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.401150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.401289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.401325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.401452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.401482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.401638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.401672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.401818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.401855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.402085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.402120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.402272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.402308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.402489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.402524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 
00:27:26.638 [2024-12-11 15:08:19.402786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.402821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.403027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.403061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.403328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.403365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.403484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.403521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.403704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.403740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.403862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.403898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.638 qpair failed and we were unable to recover it. 00:27:26.638 [2024-12-11 15:08:19.404102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.638 [2024-12-11 15:08:19.404138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.404362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.404397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.404621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.404656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.404887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.404928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 
00:27:26.639 [2024-12-11 15:08:19.405123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.405180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.405460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.405494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.405627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.405662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.405787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.405823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.406038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.406072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.406313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.406349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.406605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.406639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.406772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.406807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.406989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.407024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.407209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.407245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 
00:27:26.639 [2024-12-11 15:08:19.407410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.407444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.407637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.407671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.407871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.407906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.408110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.408145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.408343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.408378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.408537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.408572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.408758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.408792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.408997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.409032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.409309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.409345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.409553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.409589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 
00:27:26.639 [2024-12-11 15:08:19.409735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.409771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.409952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.409987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.410135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.410181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.410377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.410412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.410539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.410574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.410724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.410760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.410945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.410981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.411295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.411332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.411525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.411560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.411843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.411878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 
00:27:26.639 [2024-12-11 15:08:19.412086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.412121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.412270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.412306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.412597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.412632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.412934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.412968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.413262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.413298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.639 [2024-12-11 15:08:19.413506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.639 [2024-12-11 15:08:19.413540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.639 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.413719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.413753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.413900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.413935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.414139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.414183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.414317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.414352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 
00:27:26.640 [2024-12-11 15:08:19.414556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.414591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.414888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.414923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.415106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.415141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.415365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.415401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.415602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.415636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.415817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.415851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.415963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.415998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.416117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.416149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.416360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.416394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.416535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.416570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 
00:27:26.640 [2024-12-11 15:08:19.416749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.416783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.416975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.417011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.417251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.417289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.417428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.417463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.417664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.417699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.418060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.418094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.418373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.418410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.418613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.418649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.418775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.418809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.418933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.418968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 
00:27:26.640 [2024-12-11 15:08:19.419220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.419257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.419562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.419596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.419836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.419870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.420060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.420094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.420293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.420329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.420607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.420641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.420777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.420812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.421016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.421057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.421325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.421362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.421569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.421605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 
00:27:26.640 [2024-12-11 15:08:19.421800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.421833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.422114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.422148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.422345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.422380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.422647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.422682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.422976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.423011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.640 qpair failed and we were unable to recover it. 00:27:26.640 [2024-12-11 15:08:19.423198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.640 [2024-12-11 15:08:19.423234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.423425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.423459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.423608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.423643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.423868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.423907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.424175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.424211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 
00:27:26.641 [2024-12-11 15:08:19.424476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.424510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.424742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.424777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.424914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.424948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.425204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.425240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.425447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.425483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.425630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.425665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.425879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.425913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.426151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.426204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.426315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.426351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.426565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.426599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 
00:27:26.641 [2024-12-11 15:08:19.426806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.426841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.427047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.427084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.427284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.427320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.427527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.427561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.427682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.427724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.427910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.427943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.428122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.428169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.428375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.428410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.428539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.428573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.428796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.428831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 
00:27:26.641 [2024-12-11 15:08:19.429029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.429063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.429204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.429240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.429368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.429402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.429535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.429570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.429767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.429802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.430081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.430116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.430252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.430287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.430399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.430434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.430647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.430683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.641 [2024-12-11 15:08:19.430908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.430942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 
00:27:26.641 [2024-12-11 15:08:19.431137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.641 [2024-12-11 15:08:19.431189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.641 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.431318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.431350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.431545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.431579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.431857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.431891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.432105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.432139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.432273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.432308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.432580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.432614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.432802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.432836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.432950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.432985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.433123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.433173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 
00:27:26.642 [2024-12-11 15:08:19.433309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.433345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.433481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.433520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.433655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.433690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.433874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.433909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.434208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.434245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.434461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.434495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.434612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.434647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.434874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.434908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.435089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.435123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.435277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.435311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 
00:27:26.642 [2024-12-11 15:08:19.435567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.435602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.435727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.435762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.435974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.436008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.436314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.436349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.436555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.436590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.436788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14dab20 is same with the state(6) to be set 00:27:26.642 [2024-12-11 15:08:19.437031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.437111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.437367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.437446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.437808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.437907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.438215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.438259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 
00:27:26.642 [2024-12-11 15:08:19.438524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.438560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.438849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.438884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.439197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.439235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.439351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.439383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.439504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.439538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.439670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.439704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.439889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.439924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.440107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.440141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.440369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.440404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.642 [2024-12-11 15:08:19.440549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.440582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 
00:27:26.642 [2024-12-11 15:08:19.440697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.642 [2024-12-11 15:08:19.440730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.642 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.440957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.440993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.441195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.441230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.441414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.441449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.441583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.441617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.441774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.441808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.441986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.442019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.442214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.442248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.442452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.442487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.442764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.442797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 
00:27:26.643 [2024-12-11 15:08:19.442976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.443011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.443192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.443227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.443409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.443449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.443632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.443666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.443869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.443904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.444177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.444212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.444464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.444498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.444685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.444719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.445003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.445036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.445261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.445295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 
00:27:26.643 [2024-12-11 15:08:19.445550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.445585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.445708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.445742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.446013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.446047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.446229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.446264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.446389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.446422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.446552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.446587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.446738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.446773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.446986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.447021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.447208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.447243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.447365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.447398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 
00:27:26.643 [2024-12-11 15:08:19.447509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.447540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.447826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.447861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.448000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.448034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.448178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.448215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.448443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.448477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.448611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.448644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.448762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.448796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.448916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.448949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.449129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.449173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.643 qpair failed and we were unable to recover it. 00:27:26.643 [2024-12-11 15:08:19.449380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.643 [2024-12-11 15:08:19.449416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.644 qpair failed and we were unable to recover it. 
00:27:26.649 [2024-12-11 15:08:19.495071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.495106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.495328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.495363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.495638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.495672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.495901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.495934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.496140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.496183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.496444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.496478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.496658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.496692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.496957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.496991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.497199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.497235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.497512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.497547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 
00:27:26.649 [2024-12-11 15:08:19.497860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.497894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.498180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.498216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.498423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.498463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.498621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.498655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.498776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.498811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.498960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.498993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.499201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.499236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.499362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.499397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.499533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.499567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.499763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.499797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 
00:27:26.649 [2024-12-11 15:08:19.500053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.500087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.500340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.500376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.500562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.500594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.500714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.500749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.500929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.500963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.501195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.501230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.501490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.649 [2024-12-11 15:08:19.501525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.649 qpair failed and we were unable to recover it. 00:27:26.649 [2024-12-11 15:08:19.501710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.501744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.501873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.501908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.502136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.502193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 
00:27:26.650 [2024-12-11 15:08:19.502320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.502352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.502467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.502501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.502657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.502691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.502907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.502941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.503136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.503182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.503317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.503352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.503572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.503605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.503715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.503748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.503946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.503980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.504128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.504175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 
00:27:26.650 [2024-12-11 15:08:19.504402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.504435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.504571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.504606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.504805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.504839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.505099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.505133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.505332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.505368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.505491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.505525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.505720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.505755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.505965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.505999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.506255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.506289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.506412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.506447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 
00:27:26.650 [2024-12-11 15:08:19.506581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.506615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.506847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.506881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.506997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.507035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.507236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.507271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.507525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.507560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.507794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.507828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.508013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.508048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.508237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.508272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.508544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.508579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.508872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.508906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 
00:27:26.650 [2024-12-11 15:08:19.509023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.509057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.509258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.650 [2024-12-11 15:08:19.509294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.650 qpair failed and we were unable to recover it. 00:27:26.650 [2024-12-11 15:08:19.509501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.509535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.509718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.509752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.509957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.509991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.510281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.510317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.510556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.510592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.510702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.510733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.510947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.510981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.511111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.511147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 
00:27:26.651 [2024-12-11 15:08:19.511291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.511325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.511532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.511568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.511785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.511819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.512013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.512048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.512257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.512292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.512478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.512512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.512693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.512727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.512841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.512873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.513147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.513189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.513473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.513510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 
00:27:26.651 [2024-12-11 15:08:19.513724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.513759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.514061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.514096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.514297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.514331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.514456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.514489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.514643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.514677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.514817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.514851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.515073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.515106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.515230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.515266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.515401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.515436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.515617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.515651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 
00:27:26.651 [2024-12-11 15:08:19.515837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.515871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.516171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.516206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.516467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.516506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.516774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.516808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.517037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.517071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.517255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.517291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.517491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.517525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.517717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.517750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.517859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.517890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.518070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.518103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 
00:27:26.651 [2024-12-11 15:08:19.518314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.651 [2024-12-11 15:08:19.518352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.651 qpair failed and we were unable to recover it. 00:27:26.651 [2024-12-11 15:08:19.518480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.518513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.518637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.518671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.518790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.518823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.519020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.519054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.519334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.519371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.519497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.519531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.519748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.519782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.519979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.520014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.520198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.520233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 
00:27:26.652 [2024-12-11 15:08:19.520429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.520463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.520601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.520635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.520858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.520891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.521086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.521119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.521342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.521378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.521498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.521532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.521664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.521698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.521902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.521937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.522118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.522153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.522323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.522358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 
00:27:26.652 [2024-12-11 15:08:19.522583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.522618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.522760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.522794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.523010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.523046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.523180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.523213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.523474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.523507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.523716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.523752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.523964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.523997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.524136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.524181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.524392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.524425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.524573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.524607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 
00:27:26.652 [2024-12-11 15:08:19.524852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.524886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.525067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.525101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.525368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.525411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.525611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.525646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.525830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.525865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.526119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.526153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.526291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.526325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.526519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.526553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.526686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.526720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 00:27:26.652 [2024-12-11 15:08:19.526857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.652 [2024-12-11 15:08:19.526891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.652 qpair failed and we were unable to recover it. 
00:27:26.652 [2024-12-11 15:08:19.527010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.527045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.527246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.527283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.527487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.527522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.527653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.527687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.527980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.528015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.528202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.528239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.528477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.528511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.528722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.528757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.529046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.529080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.529354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.529388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 
00:27:26.653 [2024-12-11 15:08:19.529518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.529554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.529749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.529783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.530059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.530093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.530300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.530335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.530477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.530510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.530706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.530740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.530930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.530963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.531243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.531278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.531490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.531524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.531650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.531683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 
00:27:26.653 [2024-12-11 15:08:19.531807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.531842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.532119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.532152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.532376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.532411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.532620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.532653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.532959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.532993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.533128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.533169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.533379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.533413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.533546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.533579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.533864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.533899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.534055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.534089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 
00:27:26.653 [2024-12-11 15:08:19.534278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.534313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.534463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.534497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.534714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.534754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.535011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.535044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.535192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.535229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.535356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.535390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.535552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.535586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.535705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.535739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.535974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.653 [2024-12-11 15:08:19.536008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.653 qpair failed and we were unable to recover it. 00:27:26.653 [2024-12-11 15:08:19.536185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.536221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 
00:27:26.654 [2024-12-11 15:08:19.536428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.536461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.536669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.536703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.536982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.537017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.537291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.537325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.537591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.537625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.537833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.537868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.538006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.538040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.538325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.538360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.538645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.538679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.539014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.539047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 
00:27:26.654 [2024-12-11 15:08:19.539206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.539240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.539430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.539464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.539672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.539706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.539852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.539887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.540065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.540098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.540348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.540383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.540512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.540545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.540748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.540782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.540891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.540924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.541180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.541262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 
00:27:26.654 [2024-12-11 15:08:19.541483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.541524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.541668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.541704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.541903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.541938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.542170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.542207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.542414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.542448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.542587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.542621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.542748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.542784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.542989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.543024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.543240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.543277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.543418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.543454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 
00:27:26.654 [2024-12-11 15:08:19.543586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.543621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.543747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.543783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.543988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.544023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.544228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.654 [2024-12-11 15:08:19.544265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.654 qpair failed and we were unable to recover it. 00:27:26.654 [2024-12-11 15:08:19.544548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.544584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.544836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.544871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.545061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.545095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.545375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.545411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.545727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.545762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.545965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.546000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 
00:27:26.655 [2024-12-11 15:08:19.546299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.546335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.546533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.546568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.546750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.546784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.546987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.547022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.547282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.547319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.547528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.547563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.547861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.547909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.548038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.548073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.548345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.548382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.548509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.548544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 
00:27:26.655 [2024-12-11 15:08:19.548655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.548688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.548883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.548919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.549121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.549156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.549389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.549425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.549567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.549602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.549718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.549754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.549937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.549971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.550277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.550313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.550523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.550559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.550776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.550811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 
00:27:26.655 [2024-12-11 15:08:19.551013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.551047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.551229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.551265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.551445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.551481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.551784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.551820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.552021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.552057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.552219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.552254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.552406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.552442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.552653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.552690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.552891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.552926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.553108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.553143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 
00:27:26.655 [2024-12-11 15:08:19.553365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.553401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.553521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.553557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.553794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.553830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.655 [2024-12-11 15:08:19.554045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.655 [2024-12-11 15:08:19.554087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.655 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.554214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.554250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.554455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.554490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.554610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.554644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.554773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.554809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.554936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.554971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.555153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.555211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 
00:27:26.656 [2024-12-11 15:08:19.555332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.555367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.555491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.555526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.555659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.555695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.555903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.555938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.556209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.556246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.556524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.556558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.556678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.556713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.556900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.556934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.557155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.557199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.557394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.557429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 
00:27:26.656 [2024-12-11 15:08:19.557572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.557608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.557809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.557845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.558141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.558185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.558316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.558351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.558474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.558509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.558768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.558803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.559007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.559042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.559229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.559265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.559398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.559433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.559647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.559684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 
00:27:26.656 [2024-12-11 15:08:19.559809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.559845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.560071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.560106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.560353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.560389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.560588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.560623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.560891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.560927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.561131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.561184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.561317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.561352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.561558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.561593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.561705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.561740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.561876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.561912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 
00:27:26.656 [2024-12-11 15:08:19.562121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.562173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.562360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.562396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.562674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.656 [2024-12-11 15:08:19.562710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.656 qpair failed and we were unable to recover it. 00:27:26.656 [2024-12-11 15:08:19.562914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.657 [2024-12-11 15:08:19.562949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.657 qpair failed and we were unable to recover it. 00:27:26.657 [2024-12-11 15:08:19.563184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.657 [2024-12-11 15:08:19.563220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.657 qpair failed and we were unable to recover it. 00:27:26.657 [2024-12-11 15:08:19.563414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.657 [2024-12-11 15:08:19.563450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.657 qpair failed and we were unable to recover it. 00:27:26.657 [2024-12-11 15:08:19.563650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.657 [2024-12-11 15:08:19.563685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.657 qpair failed and we were unable to recover it. 00:27:26.657 [2024-12-11 15:08:19.563924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.657 [2024-12-11 15:08:19.563960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.657 qpair failed and we were unable to recover it. 00:27:26.657 [2024-12-11 15:08:19.564180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.657 [2024-12-11 15:08:19.564217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.657 qpair failed and we were unable to recover it. 00:27:26.657 [2024-12-11 15:08:19.564360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.657 [2024-12-11 15:08:19.564395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.657 qpair failed and we were unable to recover it. 
00:27:26.657 [2024-12-11 15:08:19.564612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:26.657 [2024-12-11 15:08:19.564646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 
00:27:26.657 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 15:08:19.564612 through 15:08:19.587643 ...]
[... the failure sequence for tqpair=0x14ccbe0 continues through 15:08:19.589125; at 15:08:19.589343 the failures switch to a new qpair handle ...]
00:27:26.659 [2024-12-11 15:08:19.589343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:26.659 [2024-12-11 15:08:19.589421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 
00:27:26.660 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence, now against tqpair=0x7f9ce0000b90, repeats continuously from 15:08:19.589343 through 15:08:19.612943 ...]
00:27:26.662 [2024-12-11 15:08:19.613129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.662 [2024-12-11 15:08:19.613175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.662 qpair failed and we were unable to recover it. 00:27:26.662 [2024-12-11 15:08:19.613326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.662 [2024-12-11 15:08:19.613361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.662 qpair failed and we were unable to recover it. 00:27:26.662 [2024-12-11 15:08:19.613550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.662 [2024-12-11 15:08:19.613584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.662 qpair failed and we were unable to recover it. 00:27:26.662 [2024-12-11 15:08:19.613804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.662 [2024-12-11 15:08:19.613839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.662 qpair failed and we were unable to recover it. 00:27:26.662 [2024-12-11 15:08:19.614031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.662 [2024-12-11 15:08:19.614066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.662 qpair failed and we were unable to recover it. 00:27:26.662 [2024-12-11 15:08:19.614250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.662 [2024-12-11 15:08:19.614286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.662 qpair failed and we were unable to recover it. 00:27:26.662 [2024-12-11 15:08:19.614468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.662 [2024-12-11 15:08:19.614503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.662 qpair failed and we were unable to recover it. 00:27:26.662 [2024-12-11 15:08:19.614682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.662 [2024-12-11 15:08:19.614716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.662 qpair failed and we were unable to recover it. 00:27:26.662 [2024-12-11 15:08:19.614839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.662 [2024-12-11 15:08:19.614875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.662 qpair failed and we were unable to recover it. 00:27:26.662 [2024-12-11 15:08:19.615001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.662 [2024-12-11 15:08:19.615036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.662 qpair failed and we were unable to recover it. 
00:27:26.662 [2024-12-11 15:08:19.615235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.662 [2024-12-11 15:08:19.615270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.662 qpair failed and we were unable to recover it. 00:27:26.662 [2024-12-11 15:08:19.615478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.662 [2024-12-11 15:08:19.615514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.662 qpair failed and we were unable to recover it. 00:27:26.662 [2024-12-11 15:08:19.615722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.662 [2024-12-11 15:08:19.615757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.662 qpair failed and we were unable to recover it. 00:27:26.662 [2024-12-11 15:08:19.616076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.662 [2024-12-11 15:08:19.616112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.662 qpair failed and we were unable to recover it. 00:27:26.662 [2024-12-11 15:08:19.616315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.662 [2024-12-11 15:08:19.616350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.662 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.616555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.616589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.616859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.616893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.617020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.617055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.617252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.617287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.617490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.617526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 
00:27:26.663 [2024-12-11 15:08:19.617651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.617685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.617795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.617830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.618012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.618047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.618300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.618336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.618545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.618580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.618881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.618916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.619200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.619238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.619448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.619483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.619665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.619700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.619881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.619917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 
00:27:26.663 [2024-12-11 15:08:19.620211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.620247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.620470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.620504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.620722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.620756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.620937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.620971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.621192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.621228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.621411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.621446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.621581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.621616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.621802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.621836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.621980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.622015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.622211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.622254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 
00:27:26.663 [2024-12-11 15:08:19.622439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.622474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.622595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.622630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.622861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.622896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.623040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.623075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.623232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.623268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.623412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.623448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.623657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.623692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.623914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.623950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.624153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.624216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 00:27:26.663 [2024-12-11 15:08:19.624474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.663 [2024-12-11 15:08:19.624510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.663 qpair failed and we were unable to recover it. 
00:27:26.664 [2024-12-11 15:08:19.624724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.624759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.624887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.624922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.625190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.625227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.625421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.625456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.625602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.625637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.625893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.625929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.626133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.626193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.626450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.626485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.626620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.626654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.626934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.626969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 
00:27:26.664 [2024-12-11 15:08:19.627091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.627125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.627261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.627297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.627550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.627586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.627722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.627757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.627952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.627987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.628188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.628225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.628357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.628393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.628694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.628729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.628866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.628901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.629082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.629117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 
00:27:26.664 [2024-12-11 15:08:19.629319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.629356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.629535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.629570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.629679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.629711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.629895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.629929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.630110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.630145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.630285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.630322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.630432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.630467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.630657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.630691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.630834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.630869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.631080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.631121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 
00:27:26.664 [2024-12-11 15:08:19.631352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.631388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.631522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.631556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.631738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.631773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.632055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.632090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.632290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.632325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.632533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.632568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.632764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.632798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.632981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.633017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.664 [2024-12-11 15:08:19.633208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.664 [2024-12-11 15:08:19.633244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.664 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.633427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.633462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 
00:27:26.665 [2024-12-11 15:08:19.633642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.633676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.633903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.633938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.634050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.634084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.634374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.634411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.634597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.634632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.634743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.634778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.634918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.634953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.635233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.635269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.635396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.635431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.635713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.635747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 
00:27:26.665 [2024-12-11 15:08:19.636027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.636062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.636346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.636382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.636660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.636694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.636884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.636918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.637202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.637238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.637542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.637577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.637695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.637731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.637924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.637960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.638237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.638273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.638397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.638432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 
00:27:26.665 [2024-12-11 15:08:19.638627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.638662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.638870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.638904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.639088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.639122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.639364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.639400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.639656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.639691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.639882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.639915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.640196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.640233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.640363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.640398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.640679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.640713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.640899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.640939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 
00:27:26.665 [2024-12-11 15:08:19.641196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.641238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.641365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.641397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.641584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.641617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.641852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.641887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.642069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.642104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.642313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.642349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.642603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.642637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.665 [2024-12-11 15:08:19.642761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-12-11 15:08:19.642795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.665 qpair failed and we were unable to recover it. 00:27:26.666 [2024-12-11 15:08:19.643001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.666 [2024-12-11 15:08:19.643035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.666 qpair failed and we were unable to recover it. 00:27:26.666 [2024-12-11 15:08:19.643234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.666 [2024-12-11 15:08:19.643269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:26.666 qpair failed and we were unable to recover it. 
00:27:26.666 [2024-12-11 15:08:19.643471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.666 [2024-12-11 15:08:19.643505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420
00:27:26.666 qpair failed and we were unable to recover it.
00:27:26.666 [2024-12-11 15:08:19.643615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.666 [2024-12-11 15:08:19.643650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420
00:27:26.666 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt through 2024-12-11 15:08:19.670 ...]
00:27:26.947 [2024-12-11 15:08:19.670471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.947 [2024-12-11 15:08:19.670551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420
00:27:26.947 qpair failed and we were unable to recover it.
[... the same sequence then repeats for tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 through 2024-12-11 15:08:19.695 ...]
00:27:26.949 [2024-12-11 15:08:19.695935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.949 [2024-12-11 15:08:19.695969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.949 qpair failed and we were unable to recover it. 00:27:26.949 [2024-12-11 15:08:19.696267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.949 [2024-12-11 15:08:19.696303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.949 qpair failed and we were unable to recover it. 00:27:26.949 [2024-12-11 15:08:19.696572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.949 [2024-12-11 15:08:19.696606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.949 qpair failed and we were unable to recover it. 00:27:26.949 [2024-12-11 15:08:19.696788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.949 [2024-12-11 15:08:19.696822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.949 qpair failed and we were unable to recover it. 00:27:26.949 [2024-12-11 15:08:19.697017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.949 [2024-12-11 15:08:19.697052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.949 qpair failed and we were unable to recover it. 00:27:26.949 [2024-12-11 15:08:19.697329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.949 [2024-12-11 15:08:19.697364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.949 qpair failed and we were unable to recover it. 00:27:26.949 [2024-12-11 15:08:19.697560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.949 [2024-12-11 15:08:19.697600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.949 qpair failed and we were unable to recover it. 00:27:26.949 [2024-12-11 15:08:19.697856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.949 [2024-12-11 15:08:19.697891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.949 qpair failed and we were unable to recover it. 00:27:26.949 [2024-12-11 15:08:19.698085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.949 [2024-12-11 15:08:19.698119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.949 qpair failed and we were unable to recover it. 00:27:26.949 [2024-12-11 15:08:19.698318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.949 [2024-12-11 15:08:19.698353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.949 qpair failed and we were unable to recover it. 
00:27:26.949 [2024-12-11 15:08:19.698556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.949 [2024-12-11 15:08:19.698590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.949 qpair failed and we were unable to recover it. 00:27:26.949 [2024-12-11 15:08:19.698786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.949 [2024-12-11 15:08:19.698820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.949 qpair failed and we were unable to recover it. 00:27:26.949 [2024-12-11 15:08:19.699098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.699132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.699325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.699361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.699620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.699654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.699931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.699965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.700183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.700219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.700501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.700535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.700738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.700773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.701072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.701106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 
00:27:26.950 [2024-12-11 15:08:19.701340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.701377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.701679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.701714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.701903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.701937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.702142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.702201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.702405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.702439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.702620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.702655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.702834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.702867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.703051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.703087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.703361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.703396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.703589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.703622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 
00:27:26.950 [2024-12-11 15:08:19.703802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.703837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.703959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.703993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.704200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.704237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.704469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.704504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.704643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.704678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.704889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.704923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.705213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.705248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.705483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.705518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.705792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.705825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.706067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.706102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 
00:27:26.950 [2024-12-11 15:08:19.706370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.706406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.706666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.706700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.706976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.707009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.707203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.707239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.707439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.950 [2024-12-11 15:08:19.707474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.950 qpair failed and we were unable to recover it. 00:27:26.950 [2024-12-11 15:08:19.707654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.707688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.707945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.707986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.708286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.708321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.708518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.708553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.708780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.708815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 
00:27:26.951 [2024-12-11 15:08:19.709097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.709132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.709344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.709379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.709575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.709609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.709840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.709874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.709997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.710031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.710310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.710345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.710613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.710648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.710777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.710812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.711091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.711125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.711347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.711383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 
00:27:26.951 [2024-12-11 15:08:19.711532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.711568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.711753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.711787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.712069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.712104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.712307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.712342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.712545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.712578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.712841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.712875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.713188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.713224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.713425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.713459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.713728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.713762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.714046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.714081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 
00:27:26.951 [2024-12-11 15:08:19.714361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.714397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.714542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.714577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.714791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.714825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.715036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.715071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.715200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.715235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.715528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.715563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.715675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.715709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.715840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.715874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.716078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.716112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.716306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.716341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 
00:27:26.951 [2024-12-11 15:08:19.716544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.716579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.716785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.716820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.717024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.717058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.717186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.951 [2024-12-11 15:08:19.717223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.951 qpair failed and we were unable to recover it. 00:27:26.951 [2024-12-11 15:08:19.717427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.717461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.717607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.717641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.717837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.717879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.718130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.718175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.718385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.718419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.718601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.718634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 
00:27:26.952 [2024-12-11 15:08:19.718909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.718943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.719178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.719214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.719411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.719445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.719642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.719676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.719863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.719898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.720104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.720138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.720429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.720465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.720582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.720617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.720870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.720904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.721114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.721148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 
00:27:26.952 [2024-12-11 15:08:19.721380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.721415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.721617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.721652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.721948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.721983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.722252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.722288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.722498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.722532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.722806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.722841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.723044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.723078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.723195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.723231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.723409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.723443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.723718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.723753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 
00:27:26.952 [2024-12-11 15:08:19.724033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.724067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.724277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.724313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.724566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.724600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.724788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.724824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.725002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.725038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.725244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.725279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.725488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.725523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.725752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.725787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.725972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.726008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.726276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.726313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 
00:27:26.952 [2024-12-11 15:08:19.726498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.726532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.726730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.726763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.726885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.952 [2024-12-11 15:08:19.726920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.952 qpair failed and we were unable to recover it. 00:27:26.952 [2024-12-11 15:08:19.727127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.727186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.727300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.727336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.727589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.727625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.727807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.727847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.728109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.728143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.728438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.728473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.728762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.728797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 
00:27:26.953 [2024-12-11 15:08:19.729075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.729110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.729397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.729433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.729637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.729670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.729852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.729887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.730012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.730048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.730228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.730264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.730373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.730407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.730637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.730671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.730862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.730896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.731194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.731229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 
00:27:26.953 [2024-12-11 15:08:19.731419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.731454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.731666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.731700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.731979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.732013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.732208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.732243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.732424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.732459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.732640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.732674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.732960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.732994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.733103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.733139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.733356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.733393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.733652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.733686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 
00:27:26.953 [2024-12-11 15:08:19.733812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.733847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.734031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.734066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.734318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.734354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.734607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.734690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.734998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.735036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.735235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.735272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.735455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.735490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.735713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.953 [2024-12-11 15:08:19.735748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.953 qpair failed and we were unable to recover it. 00:27:26.953 [2024-12-11 15:08:19.735988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.736024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.736328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.736365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 
00:27:26.954 [2024-12-11 15:08:19.736571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.736606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.736725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.736760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.736965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.737000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.737278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.737313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.737590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.737625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.737822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.737857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.738062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.738097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.738370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.738406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.738664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.738699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.738980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.739015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 
00:27:26.954 [2024-12-11 15:08:19.739270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.739306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.739562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.739596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.739713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.739749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.739957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.739990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.740119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.740155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.740448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.740482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.740663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.740698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.740824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.740858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.741142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.741189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.741374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.741407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 
00:27:26.954 [2024-12-11 15:08:19.741604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.741650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.741856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.741890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.742192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.742228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.742428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.742462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.742646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.742682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.742870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.742905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.743089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.743124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.743410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.743445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.743571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.743606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.954 [2024-12-11 15:08:19.743788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.743823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 
00:27:26.954 [2024-12-11 15:08:19.743933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.954 [2024-12-11 15:08:19.743966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.954 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.744218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.744254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.744390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.744423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.744697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.744731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.744920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.744955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.745232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.745266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.745401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.745435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.745712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.745747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.746011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.746046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.746345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.746381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 
00:27:26.955 [2024-12-11 15:08:19.746570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.746605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.746811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.746845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.746966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.747000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.747170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.747206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.747324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.747358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.747502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.747537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.747746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.747781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.747920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.747960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.748153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.748203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.748404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.748438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 
00:27:26.955 [2024-12-11 15:08:19.748620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.748655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.748934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.748968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.749184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.749221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.749356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.749392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.749576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.749610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.749922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.749956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.750212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.750248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.750381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.750415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.750595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.750631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.750903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.750938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 
00:27:26.955 [2024-12-11 15:08:19.751134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.751178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.751300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.751334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.751639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.751675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.751884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.751918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.752053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.955 [2024-12-11 15:08:19.752088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.955 qpair failed and we were unable to recover it. 00:27:26.955 [2024-12-11 15:08:19.752293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.752330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.752537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.752572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.752752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.752787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.753062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.753097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.753328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.753365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 
00:27:26.956 [2024-12-11 15:08:19.753550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.753585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.753769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.753804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.754008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.754043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.754291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.754327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.754464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.754504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.754687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.754721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.755000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.755036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.755354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.755390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.755517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.755551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.755739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.755774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 
00:27:26.956 [2024-12-11 15:08:19.755988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.756023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.756303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.756338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.756533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.756569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.756872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.756907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.757137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.757182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.757489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.757525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.757802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.757837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.758123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.758170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.758307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.758344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.758601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.758636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 
00:27:26.956 [2024-12-11 15:08:19.758822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.758857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.759079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.759115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.759381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.759417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.759605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.759640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.759823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.759859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.760040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.760076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.760270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.760306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.760560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.760595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.760799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.760833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.761028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.761063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 
00:27:26.956 [2024-12-11 15:08:19.761285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.761321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.761519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.956 [2024-12-11 15:08:19.761555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.956 qpair failed and we were unable to recover it. 00:27:26.956 [2024-12-11 15:08:19.761744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.761779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.761907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.761941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.762064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.762099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.762313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.762349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.762557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.762592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.762871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.762905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.763189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.763226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.763410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.763446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 
00:27:26.957 [2024-12-11 15:08:19.763625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.763660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.763790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.763825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.764080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.764114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.764334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.764370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.764483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.764516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.764724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.764759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.764867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.764901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.765083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.765119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.765310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.765347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.765552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.765586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 
00:27:26.957 [2024-12-11 15:08:19.765742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.765775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.766054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.766089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.766304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.766340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.766465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.766499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.766689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.766724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.766921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.766955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.767144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.767189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.767402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.767437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.767622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.767657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.767925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.767959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 
00:27:26.957 [2024-12-11 15:08:19.768116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.768151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.768419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.768454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.768654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.768688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.768885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.768919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.769193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.769229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.769361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.769396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.769579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.769613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.769888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.769922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.770122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.957 [2024-12-11 15:08:19.770168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.957 qpair failed and we were unable to recover it. 00:27:26.957 [2024-12-11 15:08:19.770456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.770491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 
00:27:26.958 [2024-12-11 15:08:19.770688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.770722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.770929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.770964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.771078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.771118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.771430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.771465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.771722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.771756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.772001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.772036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.772253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.772290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.772551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.772586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.772706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.772738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.772939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.772974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 
00:27:26.958 [2024-12-11 15:08:19.773182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.773218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.773426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.773461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.773576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.773610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.773722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.773755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.774009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.774044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.774250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.774286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.774477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.774512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.774782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.774817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.774932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.774966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.775245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.775282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 
00:27:26.958 [2024-12-11 15:08:19.775488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.775522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.775701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.775736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.775938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.775972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.776099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.776134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.776380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.776415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.776634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.776669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.776801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.776836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.776961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.776995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.777185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.777224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.777427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.777468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 
00:27:26.958 [2024-12-11 15:08:19.777731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.777764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.777945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.777980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.778101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.778134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.778420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.778456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.778716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.958 [2024-12-11 15:08:19.778750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.958 qpair failed and we were unable to recover it. 00:27:26.958 [2024-12-11 15:08:19.779047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.779081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.779313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.779348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.779553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.779588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.779725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.779760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.779962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.779996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 
00:27:26.959 [2024-12-11 15:08:19.780293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.780328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.780600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.780634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.780922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.780957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.781176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.781213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.781413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.781448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.781646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.781680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.781891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.781926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.782210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.782246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.782525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.782559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.782760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.782795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 
00:27:26.959 [2024-12-11 15:08:19.782976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.783012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.783195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.783231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.783437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.783472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.783685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.783719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.783901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.783937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.784136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.784206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.784392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.784427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.784616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.784651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.784830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.784865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.785042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.785076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 
00:27:26.959 [2024-12-11 15:08:19.785333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.785369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.785584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.785619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.785912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.785947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.786080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.786115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.786315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.959 [2024-12-11 15:08:19.786351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.959 qpair failed and we were unable to recover it. 00:27:26.959 [2024-12-11 15:08:19.786636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.786670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.786985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.787020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.787231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.787269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.787467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.787502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.787685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.787721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 
00:27:26.960 [2024-12-11 15:08:19.787933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.787970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.788219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.788255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.788382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.788417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.788695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.788730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.788845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.788877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.789150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.789198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.789381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.789416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.789651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.789685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.789986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.790021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.790319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.790354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 
00:27:26.960 [2024-12-11 15:08:19.790639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.790673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.790814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.790848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.791073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.791110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.791429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.791464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.791687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.791721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.792061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.792096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.792372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.792407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.792616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.792650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.792772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.792807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.793086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.793121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 
00:27:26.960 [2024-12-11 15:08:19.793315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.793350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.793530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.793564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.793758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.793792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.794071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.794106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.794412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.794447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.794573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.794606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.794737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.794772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.794973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.795013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.795193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.795229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.795409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.795443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 
00:27:26.960 [2024-12-11 15:08:19.795638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.795673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.795876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.960 [2024-12-11 15:08:19.795911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.960 qpair failed and we were unable to recover it. 00:27:26.960 [2024-12-11 15:08:19.796193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.796228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.796434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.796467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.796648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.796683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.796794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.796828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.797130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.797175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.797433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.797468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.797664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.797698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.797834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.797867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 
00:27:26.961 [2024-12-11 15:08:19.797997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.798032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.798243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.798278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.798536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.798571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.798695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.798727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.798903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.798938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.799146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.799192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.799466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.799500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.799716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.799749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.799953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.799988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.800207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.800243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 
00:27:26.961 [2024-12-11 15:08:19.800424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.800458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.800661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.800696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.800819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.800854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.801035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.801069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.801295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.801336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.801618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.801651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.801857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.801891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.802188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.802223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.802431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.802465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.802650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.802684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 
00:27:26.961 [2024-12-11 15:08:19.802960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.802994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.803118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.803152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.803343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.803377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.803584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.803618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.803819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.803854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.804133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.804182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.804452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.804486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.804770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.804805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.805086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.961 [2024-12-11 15:08:19.805122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.961 qpair failed and we were unable to recover it. 00:27:26.961 [2024-12-11 15:08:19.805343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.805379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 
00:27:26.962 [2024-12-11 15:08:19.805501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.805534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.805740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.805775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.805916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.805950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.806242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.806278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.806458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.806493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.806686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.806720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.806844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.806879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.807130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.807175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.807311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.807346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.807531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.807566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 
00:27:26.962 [2024-12-11 15:08:19.807850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.807885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.808073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.808112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.808329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.808365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.808570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.808605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.808870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.808905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.809181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.809217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.809436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.809471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.809751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.809786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.810048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.810083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.810382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.810419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 
00:27:26.962 [2024-12-11 15:08:19.810623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.810658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.810865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.810901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.811087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.811121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.811262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.811298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.811502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.811537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.811748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.811783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.812056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.812091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.812278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.812313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.812580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.812613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.812798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.812832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 
00:27:26.962 [2024-12-11 15:08:19.813088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.813122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.813324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.813359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.813613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.813648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.813849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.813883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.814087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.962 [2024-12-11 15:08:19.814121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.962 qpair failed and we were unable to recover it. 00:27:26.962 [2024-12-11 15:08:19.814266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.963 [2024-12-11 15:08:19.814301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.963 qpair failed and we were unable to recover it. 00:27:26.963 [2024-12-11 15:08:19.814594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.963 [2024-12-11 15:08:19.814629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.963 qpair failed and we were unable to recover it. 00:27:26.963 [2024-12-11 15:08:19.814839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.963 [2024-12-11 15:08:19.814873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.963 qpair failed and we were unable to recover it. 00:27:26.963 [2024-12-11 15:08:19.815167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.963 [2024-12-11 15:08:19.815204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.963 qpair failed and we were unable to recover it. 00:27:26.963 [2024-12-11 15:08:19.815344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.963 [2024-12-11 15:08:19.815378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.963 qpair failed and we were unable to recover it. 
00:27:26.963 [... duplicate failure entries elided: from 2024-12-11 15:08:19.815 through 15:08:19.864 the log repeats the same sequence of "posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111", "nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=... with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it.", mostly for tqpair=0x14ccbe0, once each for tqpair=0x7f9ce8000b90 and 0x7f9cdc000b90, and for a stretch for tqpair=0x7f9ce0000b90, all against addr=10.0.0.2, port=4420 ...]
00:27:26.968 [2024-12-11 15:08:19.864719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.968 [2024-12-11 15:08:19.864753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.968 qpair failed and we were unable to recover it. 00:27:26.968 [2024-12-11 15:08:19.864951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.968 [2024-12-11 15:08:19.864990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.968 qpair failed and we were unable to recover it. 00:27:26.968 [2024-12-11 15:08:19.865179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.968 [2024-12-11 15:08:19.865215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.968 qpair failed and we were unable to recover it. 00:27:26.968 [2024-12-11 15:08:19.865487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.968 [2024-12-11 15:08:19.865521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.968 qpair failed and we were unable to recover it. 00:27:26.968 [2024-12-11 15:08:19.865800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.968 [2024-12-11 15:08:19.865834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.968 qpair failed and we were unable to recover it. 00:27:26.968 [2024-12-11 15:08:19.866057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.968 [2024-12-11 15:08:19.866090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.968 qpair failed and we were unable to recover it. 00:27:26.968 [2024-12-11 15:08:19.866284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.968 [2024-12-11 15:08:19.866318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.968 qpair failed and we were unable to recover it. 00:27:26.968 [2024-12-11 15:08:19.866499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.968 [2024-12-11 15:08:19.866533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.968 qpair failed and we were unable to recover it. 00:27:26.968 [2024-12-11 15:08:19.866721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.866755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.866930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.866964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 
00:27:26.969 [2024-12-11 15:08:19.867172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.867207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.867387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.867421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.867600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.867633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.867900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.867933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.868213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.868249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.868530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.868565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.868840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.868875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.869069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.869103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.869228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.869264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.869460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.869494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 
00:27:26.969 [2024-12-11 15:08:19.869818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.869851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.869980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.870014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.870218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.870255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.870447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.870482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.870698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.870732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.871009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.871043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.871326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.871381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.871640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.871674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.871783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.871823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.871998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.872032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 
00:27:26.969 [2024-12-11 15:08:19.872307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.872342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.872621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.872655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.872778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.872812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.872934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.872968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.873257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.873293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.873414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.873448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.873627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.873661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.873860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.969 [2024-12-11 15:08:19.873894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.969 qpair failed and we were unable to recover it. 00:27:26.969 [2024-12-11 15:08:19.874175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.874211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.874397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.874431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 
00:27:26.970 [2024-12-11 15:08:19.874562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.874596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.874782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.874817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.875040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.875074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.875339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.875374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.875569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.875603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.875807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.875841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.876114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.876148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.876418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.876453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.876746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.876779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.876959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.876993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 
00:27:26.970 [2024-12-11 15:08:19.877264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.877300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.877550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.877584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.877876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.877911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.878166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.878202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.878466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.878501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.878781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.878816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.879037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.879072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.879207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.879242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.879427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.879463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.879654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.879689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 
00:27:26.970 [2024-12-11 15:08:19.879894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.879928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.880205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.880241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.880441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.880478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.880660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.880694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.880975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.881008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.881189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.881226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.881406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.881441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.881713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.881747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.881956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.881991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.882179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.882222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 
00:27:26.970 [2024-12-11 15:08:19.882402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.882437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.882628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.882662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.882842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.882877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.970 [2024-12-11 15:08:19.883152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.970 [2024-12-11 15:08:19.883197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.970 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.883424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.883457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.883588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.883623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.883825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.883860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.884042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.884077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.884280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.884315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.884521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.884556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 
00:27:26.971 [2024-12-11 15:08:19.884833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.884868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.885147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.885194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.885377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.885412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.885672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.885707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.886003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.886038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.886170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.886207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.886487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.886521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.886701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.886736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.886915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.886949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.887068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.887103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 
00:27:26.971 [2024-12-11 15:08:19.887226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.887260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.887456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.887491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.887683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.887718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.887897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.887932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.888126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.888184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.888465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.888500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.888703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.888743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.888947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.888983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.889261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.889296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.889524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.889558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 
00:27:26.971 [2024-12-11 15:08:19.889763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.889798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.890004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.890039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.890239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.890275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.890461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.890495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.890775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.890809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.890997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.891032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.891301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.891337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.891464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.891499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.891623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.971 [2024-12-11 15:08:19.891656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.971 qpair failed and we were unable to recover it. 00:27:26.971 [2024-12-11 15:08:19.891839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.891873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 
00:27:26.972 [2024-12-11 15:08:19.892077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.892111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.892404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.892441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.892733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.892768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.892948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.892981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.893177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.893213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.893396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.893429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.893606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.893641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.893836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.893870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.894050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.894085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.894362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.894400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 
00:27:26.972 [2024-12-11 15:08:19.894612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.894647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.894781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.894817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.894997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.895031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.895308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.895350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.895481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.895517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.895698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.895733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.895921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.895955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.896135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.896196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.896382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.896416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.896623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.896657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 
00:27:26.972 [2024-12-11 15:08:19.896855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.896890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.897093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.897127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.897349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.897385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.897668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.897702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.897905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.897941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.898065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.898099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.898333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.898369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.898505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.898541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.898767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.898801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 00:27:26.972 [2024-12-11 15:08:19.898914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.972 [2024-12-11 15:08:19.898948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.972 qpair failed and we were unable to recover it. 
00:27:26.978 [2024-12-11 15:08:19.947382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.947424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 00:27:26.978 [2024-12-11 15:08:19.947620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.947654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 00:27:26.978 [2024-12-11 15:08:19.947803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.947834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 00:27:26.978 [2024-12-11 15:08:19.947985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.948020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 00:27:26.978 [2024-12-11 15:08:19.948198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.948233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 00:27:26.978 [2024-12-11 15:08:19.948413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.948447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 00:27:26.978 [2024-12-11 15:08:19.948664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.948699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 00:27:26.978 [2024-12-11 15:08:19.948887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.948924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 00:27:26.978 [2024-12-11 15:08:19.949134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.949180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 00:27:26.978 [2024-12-11 15:08:19.949312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.949344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 
00:27:26.978 [2024-12-11 15:08:19.949647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.949682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 00:27:26.978 [2024-12-11 15:08:19.949983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.950017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 00:27:26.978 [2024-12-11 15:08:19.950202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.950236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 00:27:26.978 [2024-12-11 15:08:19.950420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.950454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 00:27:26.978 [2024-12-11 15:08:19.950634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.950667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 00:27:26.978 [2024-12-11 15:08:19.950854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.950888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 00:27:26.978 [2024-12-11 15:08:19.951008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.951041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 00:27:26.978 [2024-12-11 15:08:19.951216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.978 [2024-12-11 15:08:19.951251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.978 qpair failed and we were unable to recover it. 00:27:26.978 [2024-12-11 15:08:19.951368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.951400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.951629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.951662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 
00:27:26.979 [2024-12-11 15:08:19.951962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.951996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.952128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.952174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.952316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.952350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.952484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.952519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.952711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.952746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.952876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.952914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.953176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.953212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.953492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.953526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.953822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.953856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.954038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.954072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 
00:27:26.979 [2024-12-11 15:08:19.954348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.954384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.954576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.954609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.954884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.954921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.955126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.955174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.955480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.955514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.955796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.955831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.956113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.956147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.956291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.956325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.956580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.956615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.956822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.956858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 
00:27:26.979 [2024-12-11 15:08:19.957056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.957091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.957254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.957290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.957476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.957511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.957768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.957805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.958037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.958072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.958204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.958239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.958431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.958466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.958576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.958609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.958809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.958842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 00:27:26.979 [2024-12-11 15:08:19.959024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.979 [2024-12-11 15:08:19.959060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.979 qpair failed and we were unable to recover it. 
00:27:26.980 [2024-12-11 15:08:19.959286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.959322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.959601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.959635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.959918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.959951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.960237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.960272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.960550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.960583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.960812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.960846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.961048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.961083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.961288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.961324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.961512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.961547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.961731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.961765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 
00:27:26.980 [2024-12-11 15:08:19.961942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.961975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.962177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.962213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.962331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.962365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.962474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.962509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.962711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.962745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.962960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.962995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.963199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.963235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.963364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.963398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.963576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.963617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.963750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.963783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 
00:27:26.980 [2024-12-11 15:08:19.963960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.963995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.964192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.964230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.964502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.964537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.964811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.964845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.965043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.965078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.965206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.965242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.965440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.965475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.965673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.965707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.965820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.965854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.966136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.966201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 
00:27:26.980 [2024-12-11 15:08:19.966335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.966369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.966495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.966529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.966715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.966750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.967029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.967062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.967321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.980 [2024-12-11 15:08:19.967357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.980 qpair failed and we were unable to recover it. 00:27:26.980 [2024-12-11 15:08:19.967542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.967577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.967795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.967826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.968104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.968142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.968441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.968495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.968648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.968686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 
00:27:26.981 [2024-12-11 15:08:19.968897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.968930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.969055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.969087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.969205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.969238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.969493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.969524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.969642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.969675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.969885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.969926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.970129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.970182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.970389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.970426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.970670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.970722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.971018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.971056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 
00:27:26.981 [2024-12-11 15:08:19.971188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.971222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.971354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.971386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.971638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.971669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.971853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.971885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.972078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.972110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.972313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.972355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.972492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.972534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:26.981 [2024-12-11 15:08:19.972860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.981 [2024-12-11 15:08:19.972903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:26.981 qpair failed and we were unable to recover it. 00:27:27.264 [2024-12-11 15:08:19.973177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.264 [2024-12-11 15:08:19.973213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.264 qpair failed and we were unable to recover it. 00:27:27.264 [2024-12-11 15:08:19.973443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.264 [2024-12-11 15:08:19.973478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.264 qpair failed and we were unable to recover it. 
00:27:27.264 [2024-12-11 15:08:19.973682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.264 [2024-12-11 15:08:19.973717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.264 qpair failed and we were unable to recover it. 00:27:27.264 [2024-12-11 15:08:19.974013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.264 [2024-12-11 15:08:19.974047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.264 qpair failed and we were unable to recover it. 00:27:27.264 [2024-12-11 15:08:19.974173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.264 [2024-12-11 15:08:19.974207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.264 qpair failed and we were unable to recover it. 00:27:27.264 [2024-12-11 15:08:19.974409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.264 [2024-12-11 15:08:19.974441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.264 qpair failed and we were unable to recover it. 00:27:27.264 [2024-12-11 15:08:19.974642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.264 [2024-12-11 15:08:19.974674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.264 qpair failed and we were unable to recover it. 00:27:27.264 [2024-12-11 15:08:19.974857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.264 [2024-12-11 15:08:19.974889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.264 qpair failed and we were unable to recover it. 00:27:27.264 [2024-12-11 15:08:19.975086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.264 [2024-12-11 15:08:19.975121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.264 qpair failed and we were unable to recover it. 00:27:27.264 [2024-12-11 15:08:19.975360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.264 [2024-12-11 15:08:19.975396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.264 qpair failed and we were unable to recover it. 00:27:27.264 [2024-12-11 15:08:19.975508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.264 [2024-12-11 15:08:19.975539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.264 qpair failed and we were unable to recover it. 00:27:27.264 [2024-12-11 15:08:19.975820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.264 [2024-12-11 15:08:19.975854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.264 qpair failed and we were unable to recover it. 
00:27:27.264 [2024-12-11 15:08:19.976060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.976094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.976375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.976412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.976627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.976663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.976852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.976886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.977005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.977036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.977238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.977273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.977594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.977628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.977737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.977770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.977965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.977999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.978129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.978177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 
00:27:27.265 [2024-12-11 15:08:19.978377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.978411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.978597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.978631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.978913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.978949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.979253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.979289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.979476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.979510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.979692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.979727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.979852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.979886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.980183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.980218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.980342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.980373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.980551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.980585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 
00:27:27.265 [2024-12-11 15:08:19.980765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.980799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.980977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.981012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.981295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.981331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.981461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.981496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.981683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.981717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.981839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.981873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.982089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.982123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.982329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.982365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.982589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.982623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.982808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.982842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 
00:27:27.265 [2024-12-11 15:08:19.983029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.983065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.983319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.983355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.983609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.983643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.983768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.983801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.265 [2024-12-11 15:08:19.984019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.265 [2024-12-11 15:08:19.984053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.265 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.984275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.984310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.984528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.984561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.984764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.984798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.985019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.985054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.985247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.985282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 
00:27:27.266 [2024-12-11 15:08:19.985463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.985496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.985687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.985721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.985851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.985885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.986069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.986108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.986309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.986344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.986596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.986629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.986808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.986842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.987113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.987148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.987387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.987431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.987559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.987592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 
00:27:27.266 [2024-12-11 15:08:19.987796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.987830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.987977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.988011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.988196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.988232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.988351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.988384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.988564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.988598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.988801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.988835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.989032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.989066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.989217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.989253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.989373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.989408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.989591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.989624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 
00:27:27.266 [2024-12-11 15:08:19.989803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.989836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.990047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.990081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.990197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.990231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.990423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.990457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.990664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.990697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.990877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.990910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.991030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.991064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.991332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.991367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.991641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.991673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 00:27:27.266 [2024-12-11 15:08:19.991854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.991888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.266 qpair failed and we were unable to recover it. 
00:27:27.266 [2024-12-11 15:08:19.992069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.266 [2024-12-11 15:08:19.992109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.992375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.992409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.992530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.992564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.992745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.992778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.992988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.993022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.993149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.993197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.993321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.993356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.993536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.993570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.993750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.993784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.993908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.993943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 
00:27:27.267 [2024-12-11 15:08:19.994121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.994155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.994280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.994314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.994502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.994536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.994659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.994693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.994841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.994875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.995056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.995089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.995216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.995252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.995488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.995522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.995642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.995675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.995875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.995909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 
00:27:27.267 [2024-12-11 15:08:19.996029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.996062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.996243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.996278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.996459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.996492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.996677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.996711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.996834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.996868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.996978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.997012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.997216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.997251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.997385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.997425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.997557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.997592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.997703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.997738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 
00:27:27.267 [2024-12-11 15:08:19.997923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.998003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.998325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.998368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.998496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.998529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.998652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.998684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.998799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.267 [2024-12-11 15:08:19.998831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.267 qpair failed and we were unable to recover it. 00:27:27.267 [2024-12-11 15:08:19.999028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:19.999062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:19.999193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:19.999229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:19.999353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:19.999387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:19.999497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:19.999529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:19.999727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:19.999761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 
00:27:27.268 [2024-12-11 15:08:19.999962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:19.999997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.000140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.000184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.000370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.000403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.000518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.000552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.000745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.000778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.001055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.001092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.001293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.001329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.001512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.001546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.001677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.001712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.001922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.001959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 
00:27:27.268 [2024-12-11 15:08:20.002085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.002118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.002260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.002294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.002415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.002450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.002581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.002615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.002740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.002782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.002897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.002930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.003070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.003104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.003297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.003332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.003595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.003629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.003847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.003880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 
00:27:27.268 [2024-12-11 15:08:20.004073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.004106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.004246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.004281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.004397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.004430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.004614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.004649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.004779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.004814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.005000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.005034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.005171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.005206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.005322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.005357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.005476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.005511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 00:27:27.268 [2024-12-11 15:08:20.005630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.268 [2024-12-11 15:08:20.005665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.268 qpair failed and we were unable to recover it. 
00:27:27.268 [2024-12-11 15:08:20.005776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.005811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.005990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.006024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.006203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.006238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.006520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.006556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.006680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.006715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.006841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.006874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.007022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.007056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.007241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.007276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.007399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.007433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.007585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.007621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 
00:27:27.269 [2024-12-11 15:08:20.007752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.007786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.007916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.007952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.008075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.008109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.008245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.008281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.008389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.008422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.008533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.008567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.008677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.008713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.008907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.008941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.009055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.009089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.009243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.009280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 
00:27:27.269 [2024-12-11 15:08:20.009429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.009464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.009665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.009699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.009834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.009869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.009989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.010022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.010169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.010211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.010335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.010370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.010484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.269 [2024-12-11 15:08:20.010518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.269 qpair failed and we were unable to recover it. 00:27:27.269 [2024-12-11 15:08:20.010634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.010670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.010797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.010831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.010938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.010973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 
00:27:27.270 [2024-12-11 15:08:20.011086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.011120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.011246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.011281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.011475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.011510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.011692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.011726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.011837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.011872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.011987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.012021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.012201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.012237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.012417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.012451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.012587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.012622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.012737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.012772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 
00:27:27.270 [2024-12-11 15:08:20.012969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.013003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.013115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.013150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.013266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.013301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.013422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.013455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.013636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.013671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.013853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.013895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.014011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.014044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.014152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.014195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.014376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.014416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.014601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.014635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 
00:27:27.270 [2024-12-11 15:08:20.014757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.014790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.014976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.015010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.015194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.015230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.015355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.015389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.015604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.015638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.015747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.015780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.015979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.016013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.016146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.016195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.016323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.016357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 00:27:27.270 [2024-12-11 15:08:20.016537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.016572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.270 qpair failed and we were unable to recover it. 
00:27:27.270 [2024-12-11 15:08:20.016767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.270 [2024-12-11 15:08:20.016801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.016995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.017029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.017286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.017322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.017508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.017543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.017722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.017762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.017962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.017996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.018111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.018144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.018300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.018336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.018460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.018495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.018634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.018676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 
00:27:27.271 [2024-12-11 15:08:20.018884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.018918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.019176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.019213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.019392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.019427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.019541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.019576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.019784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.019819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.020021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.020055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.020186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.020220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.020411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.020443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.020561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.020595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.020788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.020823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 
00:27:27.271 [2024-12-11 15:08:20.020952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.020984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.021193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.021229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.021360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.021396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.021587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.021620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.021735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.021770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.021905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.021939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.022226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.022276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.022439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.022487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.022672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.022749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.023027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.023066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 
00:27:27.271 [2024-12-11 15:08:20.023200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.023239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.023421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.023459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.023643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.023679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.023862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.271 [2024-12-11 15:08:20.023898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.271 qpair failed and we were unable to recover it. 00:27:27.271 [2024-12-11 15:08:20.024059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.024094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.024263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.024299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.024440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.024475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.024620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.024656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.024798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.024833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.024948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.024983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 
00:27:27.272 [2024-12-11 15:08:20.025181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.025219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.025410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.025474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.025723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.025795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.025986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.026056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.026304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.026400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.026747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.026812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.027024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.027078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.027270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.027317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.027447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.027491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.027615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.027650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 
00:27:27.272 [2024-12-11 15:08:20.027778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.027813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.027934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.027967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.028084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.028118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.028264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.028299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.028500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.028535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.028669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.028703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.028817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.028851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.029028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.029062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.029193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.029229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.029365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.029400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 
00:27:27.272 [2024-12-11 15:08:20.029606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.029642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.029845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.029897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.030057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.030109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.030321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.030361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.030615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.030675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.030810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.030846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.030963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.030998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.272 [2024-12-11 15:08:20.031183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.272 [2024-12-11 15:08:20.031219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.272 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.031349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.031383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.031501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.031537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 
00:27:27.273 [2024-12-11 15:08:20.031655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.031689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.031964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.031999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.032193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.032229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.032445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.032479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.032659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.032692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.032819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.032853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.032994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.033028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.033136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.033186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.033367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.033401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.033660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.033697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 
00:27:27.273 [2024-12-11 15:08:20.033827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.033861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.034077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.034112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.034240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.034274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.034393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.034428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.034607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.034647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.034931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.034965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.035077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.035109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.035256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.035291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.035500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.035535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.035751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.035786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 
00:27:27.273 [2024-12-11 15:08:20.035961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.035995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.036217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.036252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.036376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.036411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.036520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.036554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.036684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.036717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.036846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.036880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.037130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.037170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.037348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.037381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.037501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.037536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.037756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.037790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 
00:27:27.273 [2024-12-11 15:08:20.037965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.037999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.038127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.038170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.273 qpair failed and we were unable to recover it. 00:27:27.273 [2024-12-11 15:08:20.038317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.273 [2024-12-11 15:08:20.038352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.038477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.038511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.038714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.038748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.038926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.038960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.039151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.039207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.039320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.039355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.039530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.039564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.039745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.039779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 
00:27:27.274 [2024-12-11 15:08:20.039963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.039998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.040189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.040225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.040333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.040367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.040564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.040597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.040796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.040830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.041024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.041058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.041180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.041215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.041413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.041447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.041577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.041611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.041734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.041767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 
00:27:27.274 [2024-12-11 15:08:20.041941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.041975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.042085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.042118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.042310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.042345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.042466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.042499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.042679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.042719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.042824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.042857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.042983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.043017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.043217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.043253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.043374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.043407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.043584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.043619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 
00:27:27.274 [2024-12-11 15:08:20.043796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.274 [2024-12-11 15:08:20.043830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.274 qpair failed and we were unable to recover it. 00:27:27.274 [2024-12-11 15:08:20.044023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.044057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.044178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.044213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.044331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.044364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.044548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.044583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.044777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.044810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.045009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.045042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.045300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.045336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.045544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.045579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.045696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.045730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 
00:27:27.275 [2024-12-11 15:08:20.045913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.045948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.046067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.046100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.046220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.046256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.046444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.046478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.046602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.046636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.046813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.046848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.046976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.047011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.047127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.047186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.047363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.047397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.047667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.047701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 
00:27:27.275 [2024-12-11 15:08:20.047903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.047937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.048120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.048154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.048277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.048311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.048429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.048463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.048583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.048617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.048810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.048844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.048970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.049002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.049115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.049149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.049362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.049396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.049500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.049534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 
00:27:27.275 [2024-12-11 15:08:20.049660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.049693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.049869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.049903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.050013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.050047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.050232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.050267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.050382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.050423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.050563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.050596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.275 qpair failed and we were unable to recover it. 00:27:27.275 [2024-12-11 15:08:20.050728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.275 [2024-12-11 15:08:20.050762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.050870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.050903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.051076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.051108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.051317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.051353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 
00:27:27.276 [2024-12-11 15:08:20.051460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.051495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.051666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.051699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.051867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.051901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.052095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.052129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.052402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.052436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.052630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.052664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.052928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.052962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.053170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.053204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.053318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.053352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.053523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.053558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 
00:27:27.276 [2024-12-11 15:08:20.053678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.053711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.053835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.053869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.054055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.054089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.054371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.054407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.054526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.054559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.054736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.054770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.055011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.055045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.055220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.055255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.055373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.055406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.055605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.055638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 
00:27:27.276 [2024-12-11 15:08:20.055813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.055846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.056014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.056091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.056289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.056364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.056563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.056601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.056834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.056867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.057052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.057086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.057212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.057249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.057443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.057476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.057645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.057678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 00:27:27.276 [2024-12-11 15:08:20.057944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.276 [2024-12-11 15:08:20.057978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.276 qpair failed and we were unable to recover it. 
00:27:27.276 [2024-12-11 15:08:20.058151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.058196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.058308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.058343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.058522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.058556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.058681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.058713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.058993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.059039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.059180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.059215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.059394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.059427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.059658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.059691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.059820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.059853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.060068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.060101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 
00:27:27.277 [2024-12-11 15:08:20.060307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.060340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.060512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.060546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.060701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.060734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.060838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.060870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.060978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.061010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.061128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.061173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.061311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.061359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.061509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.061552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.061724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.061768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.061912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.061954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 
00:27:27.277 [2024-12-11 15:08:20.062189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.062236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.062470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.062546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.062745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.062791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.062987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.063023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.063148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.063199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.063324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.063357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.063519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.063553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.063783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.063817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.063976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.064010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.064124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.064169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 
00:27:27.277 [2024-12-11 15:08:20.064296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.064329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.064466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.064509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.064657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.064691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.064816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.277 [2024-12-11 15:08:20.064849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.277 qpair failed and we were unable to recover it. 00:27:27.277 [2024-12-11 15:08:20.064973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.065005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.065131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.065180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.065309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.065349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.065468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.065502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.065610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.065643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.065863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.065897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 
00:27:27.278 [2024-12-11 15:08:20.066011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.066044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.066170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.066205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.066418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.066452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.066579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.066613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.066725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.066758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.066943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.066977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.067097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.067130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.067339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.067374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.067489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.067523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.067635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.067668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 
00:27:27.278 [2024-12-11 15:08:20.067864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.067898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.068114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.068147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.068342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.068376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.068503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.068537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.068665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.068698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.068871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.068904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.069145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.069191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.069302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.069336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.069527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.069566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.069794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.069826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 
00:27:27.278 [2024-12-11 15:08:20.069963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.069996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.070115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.070148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.070339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.070373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.070542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.070574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.070693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.070726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.070907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.070939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.071071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.071104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.071231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.071265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.071378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.071411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.278 qpair failed and we were unable to recover it. 00:27:27.278 [2024-12-11 15:08:20.071517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.278 [2024-12-11 15:08:20.071552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 
00:27:27.279 [2024-12-11 15:08:20.071724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.071757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.071863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.071896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.072023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.072058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.072189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.072226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.072361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.072394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.072503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.072536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.072640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.072674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.072866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.072899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.073007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.073041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.073178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.073212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 
00:27:27.279 [2024-12-11 15:08:20.073411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.073444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.073563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.073597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.073712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.073746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.074012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.074045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.074243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.074279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.074402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.074440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.074641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.074674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.074796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.074829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.074933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.074965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.075151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.075197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 
00:27:27.279 [2024-12-11 15:08:20.075320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.075353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.075523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.075558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.075681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.075715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.075817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.075851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.076061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.076095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.076201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.076236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.076408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.076442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.076702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.076736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.076861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.076895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 00:27:27.279 [2024-12-11 15:08:20.077012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.279 [2024-12-11 15:08:20.077053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.279 qpair failed and we were unable to recover it. 
00:27:27.279 [2024-12-11 15:08:20.077193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.077238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.077441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.077475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.077599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.077633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.077831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.077871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.077991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.078023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.078141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.078187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.078376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.078410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.078517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.078550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.078721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.078754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.078863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.078897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 
00:27:27.280 [2024-12-11 15:08:20.079103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.079137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.079257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.079293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.079410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.079451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.079561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.079595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.079801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.079835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.080033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.080067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.080257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.080292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.080418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.080451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.080627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.080661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.080838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.080909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 
00:27:27.280 [2024-12-11 15:08:20.081063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.081103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.081242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.081278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.081457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.081490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.081662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.081694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.081870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.081902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.082023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.082055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.082178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.082212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.082326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.082359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.082540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.082572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.082691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.082724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 
00:27:27.280 [2024-12-11 15:08:20.082831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.082864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.082968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.083001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.083182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.083217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.083340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.280 [2024-12-11 15:08:20.083374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.280 qpair failed and we were unable to recover it. 00:27:27.280 [2024-12-11 15:08:20.083489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.083521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.083691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.083725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.083892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.083925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.084032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.084066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.084238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.084271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.084452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.084493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 
00:27:27.281 [2024-12-11 15:08:20.084673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.084710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.084976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.085009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.085194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.085228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.085346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.085379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.085598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.085631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.085802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.085836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.085950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.085983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.086089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.086123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.086326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.086365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.086542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.086576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 
00:27:27.281 [2024-12-11 15:08:20.086767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.086801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.086972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.087006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.087190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.087232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.087406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.087441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.087642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.087675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.087845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.087879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.087996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.088030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.088147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.088194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.088324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.088360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.088542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.088575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 
00:27:27.281 [2024-12-11 15:08:20.088679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.088714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.088841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.088874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.089080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.089113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.089297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.089331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.089472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.089505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.089613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.089659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.089793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.089827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.089935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.089985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.090103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.281 [2024-12-11 15:08:20.090136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.281 qpair failed and we were unable to recover it. 00:27:27.281 [2024-12-11 15:08:20.090336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.090370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 
00:27:27.282 [2024-12-11 15:08:20.090494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.090527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.090708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.090742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.090943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.090977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.091153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.091197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.091370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.091404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.091574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.091608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.091780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.091814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.091987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.092020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.092201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.092235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.092370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.092407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 
00:27:27.282 [2024-12-11 15:08:20.092519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.092552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.092742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.092775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.092886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.092920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.093093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.093126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.093319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.093355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.093486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.093520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.093646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.093679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.093808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.093841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.094012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.094045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.094259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.094292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 
00:27:27.282 [2024-12-11 15:08:20.094405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.094437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.094626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.094657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.094828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.094861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.095132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.095185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.095312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.095345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.095456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.095488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.095727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.095759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.095929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.095960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.096153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.096199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.096315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.096348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 
00:27:27.282 [2024-12-11 15:08:20.096541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.096574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.096743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.096775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.096965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.096997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.097106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.282 [2024-12-11 15:08:20.097139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.282 qpair failed and we were unable to recover it. 00:27:27.282 [2024-12-11 15:08:20.097342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.097374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.097545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.097576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.097715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.097748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.097915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.097948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.098138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.098182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.098352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.098384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 
00:27:27.283 [2024-12-11 15:08:20.098572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.098603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.098715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.098746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.098861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.098894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.099079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.099111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.099315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.099348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.099455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.099487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.099670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.099703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.099872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.099914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.100105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.100136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.100319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.100359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 
00:27:27.283 [2024-12-11 15:08:20.100529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.100562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.100731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.100763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.100875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.100906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.101096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.101126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.101354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.101391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.101507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.101541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.101659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.101691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.101816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.101849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.102023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.102057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.102297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.102331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 
00:27:27.283 [2024-12-11 15:08:20.102445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.102478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.102585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.102619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.102741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.102774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.102893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.102926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.103095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.103128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.103448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.103523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.103740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.103774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.103887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.283 [2024-12-11 15:08:20.103919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.283 qpair failed and we were unable to recover it. 00:27:27.283 [2024-12-11 15:08:20.104105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.104138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.104333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.104364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 
00:27:27.284 [2024-12-11 15:08:20.104470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.104502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.104624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.104658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.104773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.104805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.104979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.105011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.105181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.105214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.105481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.105513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.105635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.105667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.105856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.105887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.106002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.106034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.106149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.106193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 
00:27:27.284 [2024-12-11 15:08:20.106372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.106403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.106518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.106549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.106671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.106701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.106809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.106842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.106943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.106976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.107150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.107193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.107384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.107415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.107531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.107565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.107689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.107721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.107888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.107924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 
00:27:27.284 [2024-12-11 15:08:20.108153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.108198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.108402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.108433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.108625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.108657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.108766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.108805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.108921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.108952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.284 [2024-12-11 15:08:20.109064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.284 [2024-12-11 15:08:20.109097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.284 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.109226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.109260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.109435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.109466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.109693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.109727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.109865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.109897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 
00:27:27.285 [2024-12-11 15:08:20.110004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.110034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.110200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.110232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.110343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.110376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.110494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.110526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.110697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.110729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.110865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.110896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.111066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.111097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.111308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.111342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.111447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.111478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.111648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.111680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 
00:27:27.285 [2024-12-11 15:08:20.111785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.111816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.111927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.111957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.112151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.112194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.112407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.112442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.112548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.112580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.112714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.112747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.112858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.112889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.113060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.113092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.113206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.113240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.113341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.113373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 
00:27:27.285 [2024-12-11 15:08:20.113491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.113523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.113633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.113664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.113870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.113901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.114079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.114110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.114299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.114334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.285 [2024-12-11 15:08:20.114505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.285 [2024-12-11 15:08:20.114536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.285 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.114690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.114723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.114838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.114871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.115057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.115089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.115265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.115307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 
00:27:27.286 [2024-12-11 15:08:20.115514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.115546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.115650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.115682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.115784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.115816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.115934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.115967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.116136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.116175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.116295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.116327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.116428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.116460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.116658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.116691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.116870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.116903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.117071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.117103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 
00:27:27.286 [2024-12-11 15:08:20.117360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.117393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.117516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.117548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.117675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.117706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.117888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.117921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.118037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.118068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.118242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.118275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.118447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.118480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.118680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.118712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.118824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.118855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.118959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.118991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 
00:27:27.286 [2024-12-11 15:08:20.119111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.119145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.119385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.119417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.119606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.119640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.119742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.119772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.120011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.120043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.120181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.120216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.120402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.120435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.120600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.120631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.120891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.120924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 00:27:27.286 [2024-12-11 15:08:20.121090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.286 [2024-12-11 15:08:20.121121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.286 qpair failed and we were unable to recover it. 
00:27:27.287 [2024-12-11 15:08:20.121302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.121335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.121442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.121475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.121581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.121611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.121781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.121815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.121981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.122013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.122181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.122216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.122346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.122378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.122615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.122647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.122821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.122854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.122970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.123012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 
00:27:27.287 [2024-12-11 15:08:20.123127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.123186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.123305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.123336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.123437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.123469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.123656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.123687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.123806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.123840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.124006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.124038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.124205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.124238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.124451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.124483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.124664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.124695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.124802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.124834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 
00:27:27.287 [2024-12-11 15:08:20.124961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.124993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.125094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.125125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.125348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.125380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.125577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.125611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.125777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.125808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.125920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.125952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.126215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.126248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.126432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.126464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.126641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.126673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.126841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.126872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 
00:27:27.287 [2024-12-11 15:08:20.126974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.127005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.127107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.127140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.127254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.127286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.127388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.127420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.287 [2024-12-11 15:08:20.127604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.287 [2024-12-11 15:08:20.127637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.287 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.127762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.127794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.128066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.128098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.128236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.128271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.128385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.128417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.128584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.128615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 
00:27:27.288 [2024-12-11 15:08:20.128735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.128767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.128931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.128963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.129130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.129171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.129286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.129318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.129508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.129540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.129807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.129839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.130015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.130047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.130184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.130224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.130326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.130357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.130545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.130582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 
00:27:27.288 [2024-12-11 15:08:20.130751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.130783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.131044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.131075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.131193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.131226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.131414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.131447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.131613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.131644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.131750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.131781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.131894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.131927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.132099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.132129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.132457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.132528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.132662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.132700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 
00:27:27.288 [2024-12-11 15:08:20.132874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.132909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.133109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.133142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.133263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.133297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.133493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.133526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.133636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.133669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.133770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.133802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.133971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.134005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.134111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.134145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.134347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.134380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 00:27:27.288 [2024-12-11 15:08:20.134547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.288 [2024-12-11 15:08:20.134580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.288 qpair failed and we were unable to recover it. 
00:27:27.289 [2024-12-11 15:08:20.134751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.134784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.134920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.134954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.135212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.135246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.135370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.135403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.135504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.135538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.135720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.135753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.135929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.135964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.136166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.136200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.136315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.136348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.136467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.136499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 
00:27:27.289 [2024-12-11 15:08:20.136622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.136655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.136824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.136855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.136972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.137004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.137105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.137138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.137244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.137277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.137461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.137492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.137673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.137706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.137807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.137838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.137949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.137981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.138217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.138256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 
00:27:27.289 [2024-12-11 15:08:20.138375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.138406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.138516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.138547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.138662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.138693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.138906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.138939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.139062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.139093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.289 [2024-12-11 15:08:20.139264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.289 [2024-12-11 15:08:20.139297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.289 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.139464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.139496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.139661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.139693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.139806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.139838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.139947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.139979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 
00:27:27.290 [2024-12-11 15:08:20.140175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.140209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.140377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.140409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.140598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.140630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.140823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.140855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.141042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.141074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.141295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.141329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.141542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.141573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.141687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.141719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.141956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.141988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.142191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.142226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 
00:27:27.290 [2024-12-11 15:08:20.142335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.142368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.142476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.142508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.142677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.142709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.142898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.142930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.143050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.143082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.143204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.143237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.143457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.143528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.143665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.143701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.143870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.143904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.144075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.144108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 
00:27:27.290 [2024-12-11 15:08:20.144305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.144339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.144474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.144507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.144677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.144710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.144898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.144931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.145129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.290 [2024-12-11 15:08:20.145169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.290 qpair failed and we were unable to recover it. 00:27:27.290 [2024-12-11 15:08:20.145278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.145311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.145492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.145525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.145722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.145756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.145964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.145998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.146134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.146176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 
00:27:27.291 [2024-12-11 15:08:20.146305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.146338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.146513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.146546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.146676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.146708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.146901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.146935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.147051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.147083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.147263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.147298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.147413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.147446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.147632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.147665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.147796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.147829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.147933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.147966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 
00:27:27.291 [2024-12-11 15:08:20.148173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.148208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.148380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.148414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.148534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.148568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.148735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.148775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.148891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.148924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.149031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.149063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.149171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.149206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.149404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.149437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.149549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.149582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.149700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.149734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 
00:27:27.291 [2024-12-11 15:08:20.149902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.149935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.150048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.150082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.150256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.150290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.150491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.150526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.150667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.150700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.150830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.150864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.151057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.151092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.151286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.151322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.151431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.151464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 00:27:27.291 [2024-12-11 15:08:20.151671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.291 [2024-12-11 15:08:20.151705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.291 qpair failed and we were unable to recover it. 
00:27:27.292 [2024-12-11 15:08:20.151816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.151849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.151974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.152008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.152196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.152231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.152334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.152368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.152548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.152581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.152752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.152785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.152905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.152937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.153182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.153217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.153427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.153462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.153642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.153677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 
00:27:27.292 [2024-12-11 15:08:20.153782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.153822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.153992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.154026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.154197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.154232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.154501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.154534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.154661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.154695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.154821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.154854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.154968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.155002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.155131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.155174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.155285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.155318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.155424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.155458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 
00:27:27.292 [2024-12-11 15:08:20.155571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.155605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.155783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.155817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.156059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.156093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.156267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.156302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.156480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.156512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.156685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.156720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.156824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.156858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.157041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.157075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.157188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.157223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.157404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.157438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 
00:27:27.292 [2024-12-11 15:08:20.157548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.157581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.157699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.157734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.157847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.157881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.157990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.158024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.158201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.292 [2024-12-11 15:08:20.158237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.292 qpair failed and we were unable to recover it. 00:27:27.292 [2024-12-11 15:08:20.158448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.158482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.158620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.158654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.158852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.158891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.159060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.159093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.159226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.159262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 
00:27:27.293 [2024-12-11 15:08:20.159449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.159483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.159598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.159633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.159753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.159787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.159978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.160012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.160182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.160218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.160390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.160424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.160536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.160570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.160676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.160710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.160893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.160927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.161099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.161133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 
00:27:27.293 [2024-12-11 15:08:20.161246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.161279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.161453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.161488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.161667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.161701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.161808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.161841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.162024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.162056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.162170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.162204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.162371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.162405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.162589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.162623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.162808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.162840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.163028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.163061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 
00:27:27.293 [2024-12-11 15:08:20.163180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.163215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.163316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.163349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.163551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.163584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.163698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.163731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.163926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.163967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.164206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.164242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.164415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.164449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.164618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.164652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.164887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.164922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 00:27:27.293 [2024-12-11 15:08:20.165036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.165069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.293 qpair failed and we were unable to recover it. 
00:27:27.293 [2024-12-11 15:08:20.165248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.293 [2024-12-11 15:08:20.165284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.165386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.165419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.165588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.165621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.165734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.165767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.165943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.165976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.166216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.166250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.166419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.166452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.166557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.166589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.166713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.166748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.166864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.166897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 
00:27:27.294 [2024-12-11 15:08:20.167087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.167121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.167300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.167335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.167506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.167538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.167749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.167782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.167967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.168000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.168180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.168215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.168418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.168451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.168556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.168590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.168780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.168814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.168987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.169020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 
00:27:27.294 [2024-12-11 15:08:20.169125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.169191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.169301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.169334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.169508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.169543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.169718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.169751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.169939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.169971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.170145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.170192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.170297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.170330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.170575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.170609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.170715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.170748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.170863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.170896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 
00:27:27.294 [2024-12-11 15:08:20.171063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.171096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.171217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.171250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.171421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.171456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.171639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.171673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.171881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.171914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.172103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.172148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.172283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.294 [2024-12-11 15:08:20.172317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.294 qpair failed and we were unable to recover it. 00:27:27.294 [2024-12-11 15:08:20.172438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.172471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.172656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.172689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.172873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.172907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 
00:27:27.295 [2024-12-11 15:08:20.173024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.173057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.173213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.173250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.173371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.173404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.173582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.173616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.173737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.173772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.173877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.173910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.174082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.174116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.174317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.174352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.174472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.174506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.174632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.174665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 
00:27:27.295 [2024-12-11 15:08:20.174765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.174798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.174919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.174952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.175121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.175156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.175273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.175305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.175471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.175504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.175677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.175711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.175819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.175851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.176058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.176092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.176212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.176247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.176347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.176381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 
00:27:27.295 [2024-12-11 15:08:20.176642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.176676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.176791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.176831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.176947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.176991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.177108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.177141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.177259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.177292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.177407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.177440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.177543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.177577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.177757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.177791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.177907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.177942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 00:27:27.295 [2024-12-11 15:08:20.178182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.295 [2024-12-11 15:08:20.178218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.295 qpair failed and we were unable to recover it. 
00:27:27.295 [2024-12-11 15:08:20.178322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.295 [2024-12-11 15:08:20.178356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420
00:27:27.295 qpair failed and we were unable to recover it.
00:27:27.295-00:27:27.301 The same pair of errors (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error) repeats for every reconnect attempt logged between [2024-12-11 15:08:20.178] and [2024-12-11 15:08:20.217], cycling through tqpair=0x14ccbe0, tqpair=0x7f9ce8000b90, and tqpair=0x7f9cdc000b90, all targeting addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it."
00:27:27.301 [2024-12-11 15:08:20.217898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.217934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.218034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.218067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.218174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.218210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.218324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.218357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.218461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.218496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.218696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.218729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.218848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.218882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.219049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.219082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.219275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.219310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.219579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.219612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 
00:27:27.301 [2024-12-11 15:08:20.219792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.219825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.219954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.219986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.220089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.220122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.220297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.220330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.220514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.220548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.220720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.220753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.220926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.220959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.221076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.221109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.221290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.221323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.301 qpair failed and we were unable to recover it. 00:27:27.301 [2024-12-11 15:08:20.221450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.301 [2024-12-11 15:08:20.221484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 
00:27:27.302 [2024-12-11 15:08:20.221615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.221647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.221816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.221850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.221965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.221998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.222110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.222155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.222287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.222320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.222487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.222521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.222637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.222670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.222843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.222877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.222983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.223016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.223137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.223183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 
00:27:27.302 [2024-12-11 15:08:20.223354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.223386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.223588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.223621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.223790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.223823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.223934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.223967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.224085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.224118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.224322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.224357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.224475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.224507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.224684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.224718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.224825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.224858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.224973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.225006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 
00:27:27.302 [2024-12-11 15:08:20.225106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.225139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.225320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.225354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.225467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.225499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.225604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.225637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.225813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.225847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.225952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.225985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.226089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.226121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.226245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.226280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.226399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.226432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.226536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.226569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 
00:27:27.302 [2024-12-11 15:08:20.226675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.226708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.226893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.226927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.227027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.227060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.227252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.227287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.227457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.227490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.227601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.227634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.227801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.227833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.227934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.227968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.302 [2024-12-11 15:08:20.228137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.302 [2024-12-11 15:08:20.228193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.302 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.228372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.228405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 
00:27:27.303 [2024-12-11 15:08:20.228588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.228621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.228734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.228766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.228932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.228965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.229155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.229205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.229312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.229344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.229543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.229575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.229764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.229797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.229964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.229996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.230199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.230233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.230421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.230457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 
00:27:27.303 [2024-12-11 15:08:20.230580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.230614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.230718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.230751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.230863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.230896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.231019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.231052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.231182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.231217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.231353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.231386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.231557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.231590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.231698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.231733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.231901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.231934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.232132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.232173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 
00:27:27.303 [2024-12-11 15:08:20.232349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.232383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.232500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.232532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.232650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.232683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.232850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.232884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.233055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.233087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.233260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.233295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.233558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.233591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.233760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.233793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.233976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.234009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.234115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.234149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 
00:27:27.303 [2024-12-11 15:08:20.234347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.234381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.234549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.234582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.234683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.234716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.234887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.234920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.235024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.235057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.235237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.235272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.235381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.235415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.303 [2024-12-11 15:08:20.235607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.303 [2024-12-11 15:08:20.235640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.303 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.235755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.235789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.235901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.235934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 
00:27:27.304 [2024-12-11 15:08:20.236033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.236066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.236243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.236277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.236455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.236489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.236625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.236665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.236775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.236809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.236928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.236961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.237077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.237109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.237292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.237327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.237510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.237543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.237743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.237777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 
00:27:27.304 [2024-12-11 15:08:20.237902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.237935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.238106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.238137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.238251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.238284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.238402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.238435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.238548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.238581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.238699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.238732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.238860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.238893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.239018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.239052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.239167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.239202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.239309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.239341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 
00:27:27.304 [2024-12-11 15:08:20.239462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.239495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.239690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.239723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.239915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.239948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.240125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.240168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.240359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.240392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.240500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.240534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.240642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.240675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.240924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.240957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.241126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.241168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 00:27:27.304 [2024-12-11 15:08:20.241364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.304 [2024-12-11 15:08:20.241396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.304 qpair failed and we were unable to recover it. 
00:27:27.304 [2024-12-11 15:08:20.241545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.304 [2024-12-11 15:08:20.241578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420
00:27:27.304 qpair failed and we were unable to recover it.
00:27:27.305 [2024-12-11 15:08:20.248916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.305 [2024-12-11 15:08:20.248990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420
00:27:27.305 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." triplet repeats continuously for tqpair=0x7f9ce8000b90 and tqpair=0x14ccbe0 (addr=10.0.0.2, port=4420) through 2024-12-11 15:08:20.281470 ...]
00:27:27.310 [2024-12-11 15:08:20.281654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.281689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.281917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.281963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.282118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.282204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.282348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.282392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.282517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.282552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.282660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.282694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.282890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.282923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.283119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.283152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.283366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.283400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.283505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.283538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 
00:27:27.310 [2024-12-11 15:08:20.283656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.283689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.283805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.283838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.283949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.283990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.284182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.284219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.284438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.284487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.284722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.284760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.284875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.284908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.285016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.285050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.285153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.285200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.285318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.285358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 
00:27:27.310 [2024-12-11 15:08:20.285462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.285495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.285617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.285650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.285761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.285794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.285962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.285997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.286121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.286155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.286372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.286417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.286567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.286614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.310 [2024-12-11 15:08:20.286747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.310 [2024-12-11 15:08:20.286787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.310 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.286895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.286928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.287049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.287084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 
00:27:27.595 [2024-12-11 15:08:20.287188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.287223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.287355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.287390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.287556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.287589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.287719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.287753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.287879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.287912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.288107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.288140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.288273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.288305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.288413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.288448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.288564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.288598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.288763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.288796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 
00:27:27.595 [2024-12-11 15:08:20.288899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.288932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.289136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.289177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.289301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.289335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.289455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.289489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.289596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.289630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.289799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.289833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.289956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.289995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.290201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.290236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.290350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.290384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.290557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.290591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 
00:27:27.595 [2024-12-11 15:08:20.290726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.290759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.290947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.290982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.595 [2024-12-11 15:08:20.291087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.595 [2024-12-11 15:08:20.291122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.595 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.291320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.291355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.291462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.291495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.291683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.291718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.291819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.291854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.291964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.291998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.292184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.292221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.292352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.292387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 
00:27:27.596 [2024-12-11 15:08:20.292510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.292544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.292714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.292747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.292934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.292969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.293096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.293129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.293307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.293341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.293510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.293543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.293661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.293695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.293820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.293854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.294042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.294076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.294211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.294246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 
00:27:27.596 [2024-12-11 15:08:20.294420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.294453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.294557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.294590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.294762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.294797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.294973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.295013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.295129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.295180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.295366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.295400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.295571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.295604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.295716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.295750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.295943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.295977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.296097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.296130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 
00:27:27.596 [2024-12-11 15:08:20.296255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.296306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.296437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.296470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.296578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.296612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.296724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.296759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.296981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.297015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.297189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.297224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.297335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.297368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.297527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.297601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.297825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.297896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.298100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.298137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 
00:27:27.596 [2024-12-11 15:08:20.298268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.298303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.298415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.596 [2024-12-11 15:08:20.298449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.596 qpair failed and we were unable to recover it. 00:27:27.596 [2024-12-11 15:08:20.298570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.298603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.298787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.298820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.298937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.298971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.299173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.299208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.299384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.299417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.299522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.299557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.299674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.299707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.299812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.299847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 
00:27:27.597 [2024-12-11 15:08:20.299952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.300002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.300126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.300168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.300323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.300357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.300491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.300525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.300731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.300764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.300888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.300921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.301091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.301124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.301247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.301282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.301452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.301486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.301590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.301622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 
00:27:27.597 [2024-12-11 15:08:20.301798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.301831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.302004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.302037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.302153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.302198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.302371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.302404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.302561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.302596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.302716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.302747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.302921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.302956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.303067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.303099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.303228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.303262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.303440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.303473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 
00:27:27.597 [2024-12-11 15:08:20.303590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.303624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.303735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.303767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.303870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.303902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.304037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.304070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.304246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.304280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.304496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.304529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.304636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.304669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.304790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.304836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.305055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.305126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 00:27:27.597 [2024-12-11 15:08:20.305285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.597 [2024-12-11 15:08:20.305323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.597 qpair failed and we were unable to recover it. 
00:27:27.597 [2024-12-11 15:08:20.305498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.597 [2024-12-11 15:08:20.305531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420
00:27:27.598 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and unrecoverable qpair error repeat for tqpair=0x14ccbe0 through 15:08:20.309547 ...]
00:27:27.598 [2024-12-11 15:08:20.309768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.598 [2024-12-11 15:08:20.309805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420
00:27:27.598 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and unrecoverable qpair error repeat for tqpair=0x7f9ce8000b90 through 15:08:20.343913 ...]
00:27:27.603 [2024-12-11 15:08:20.344025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.344058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.344176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.344211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.344323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.344356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.344532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.344563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.344687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.344719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.344939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.344972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.345177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.345211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.345341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.345373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.345587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.345620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.345787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.345820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 
00:27:27.603 [2024-12-11 15:08:20.345926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.345958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.346132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.346176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.346352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.346384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.346636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.346670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.346838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.346876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.346997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.347030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.347201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.347236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.347441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.347473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.347643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.603 [2024-12-11 15:08:20.347675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.603 qpair failed and we were unable to recover it. 00:27:27.603 [2024-12-11 15:08:20.347796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.347828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 
00:27:27.604 [2024-12-11 15:08:20.347929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.347961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.348072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.348105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.348304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.348338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.348507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.348540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.348641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.348673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.348933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.348966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.349072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.349104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.349221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.349255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.349389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.349422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.349539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.349573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 
00:27:27.604 [2024-12-11 15:08:20.349675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.349708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.349814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.349848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.349959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.349992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.350170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.350204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.350374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.350406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.350584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.350617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.350792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.350825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.351028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.351062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.351186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.351221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.351335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.351369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 
00:27:27.604 [2024-12-11 15:08:20.351612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.351644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.351820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.351854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.351983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.352015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.352191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.352225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.352449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.352483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.352590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.352624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.352795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.352827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.352995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.353028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.353140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.353182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.353349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.353381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 
00:27:27.604 [2024-12-11 15:08:20.353503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.353537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.353749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.353781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.353951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.353984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.354149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.354190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.604 qpair failed and we were unable to recover it. 00:27:27.604 [2024-12-11 15:08:20.354296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.604 [2024-12-11 15:08:20.354332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.354435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.354469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.354635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.354667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.354837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.354869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.354973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.355007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.355245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.355279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 
00:27:27.605 [2024-12-11 15:08:20.355404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.355436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.355555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.355588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.355757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.355790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.355957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.355990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.356091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.356124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.356314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.356347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.356449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.356481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.356599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.356632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.356814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.356846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.357015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.357048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 
00:27:27.605 [2024-12-11 15:08:20.357155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.357196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.357366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.357399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.357514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.357546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.357722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.357766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.357911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.357945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.358169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.358203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.358382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.358414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.358602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.358639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.358765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.358801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.358993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.359025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 
00:27:27.605 [2024-12-11 15:08:20.359193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.359227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.359356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.359389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.359570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.359603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.359770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.359805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.359911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.359944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.360060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.360093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.360332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.360366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.360484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.360517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.360701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.360733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.360905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.360939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 
00:27:27.605 [2024-12-11 15:08:20.361125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.361167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.361339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.361371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.361541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.361574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.605 qpair failed and we were unable to recover it. 00:27:27.605 [2024-12-11 15:08:20.361759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.605 [2024-12-11 15:08:20.361792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.361964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.362003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.362196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.362230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.362351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.362401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.362506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.362539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.362643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.362676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.362795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.362827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 
00:27:27.606 [2024-12-11 15:08:20.362932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.362965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.363204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.363237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.363407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.363440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.363553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.363586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.363716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.363748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.363851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.363884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.364052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.364086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.364344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.364377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.364561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.364593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.364784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.364817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 
00:27:27.606 [2024-12-11 15:08:20.364986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.365018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.365132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.365173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.365365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.365398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.365596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.365627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.365823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.365855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.365972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.366005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.366132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.366172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.366278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.366310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.366418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.366451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.366575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.366607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 
00:27:27.606 [2024-12-11 15:08:20.366717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.366750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.366856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.366889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.367082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.367114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.367319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.367352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.367557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.367590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.367720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.367752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.367931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.367965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.368070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.368103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.368221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.368255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.368427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.368460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 
00:27:27.606 [2024-12-11 15:08:20.368635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.368668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.368775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.368808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.606 qpair failed and we were unable to recover it. 00:27:27.606 [2024-12-11 15:08:20.368983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.606 [2024-12-11 15:08:20.369016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.369219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.369252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.369442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.369481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.369663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.369695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.369800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.369834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.369934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.369966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.370081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.370114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.370236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.370271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 
00:27:27.607 [2024-12-11 15:08:20.370459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.370492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.370621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.370653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.370765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.370797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.370901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.370934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.371043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.371076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.371265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.371299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.371414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.371447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.371615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.371647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.371771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.371804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.371999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.372032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 
00:27:27.607 [2024-12-11 15:08:20.372154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.372195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.372315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.372347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.372455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.372488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.372669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.372703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.372855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.372888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.372999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.373032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.373134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.373178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.373282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.373314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.373432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.373465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.373578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.373611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 
00:27:27.607 [2024-12-11 15:08:20.373743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.373777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.374002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.374074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.374227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.374265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.374407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.374441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.374546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.374579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.374760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.374793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.374980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.375014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.375201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.375237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.375352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.375386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.375497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.375531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 
00:27:27.607 [2024-12-11 15:08:20.375747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.375780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.607 [2024-12-11 15:08:20.375882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.607 [2024-12-11 15:08:20.375914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.607 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.376018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.376052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.376220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.376253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.376434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.376477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.376649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.376683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.376786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.376819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.376991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.377024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.377144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.377188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.377305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.377338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 
00:27:27.608 [2024-12-11 15:08:20.377454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.377487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.377605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.377637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.377751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.377784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.377953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.377986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.378091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.378125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.378305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.378342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.378514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.378547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.378717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.378750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.378963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.378996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.379259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.379293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 
00:27:27.608 [2024-12-11 15:08:20.379407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.379440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.379637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.379670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.379841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.379874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.380041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.380073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.380191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.380225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.380411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.380444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.380626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.380659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.380767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.380799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.380971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.381005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.381176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.381210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 
00:27:27.608 [2024-12-11 15:08:20.381313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.381346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.381509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.381584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.381726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.381762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.381939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.381974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.382151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.382201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.382311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.608 [2024-12-11 15:08:20.382344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.608 qpair failed and we were unable to recover it. 00:27:27.608 [2024-12-11 15:08:20.382464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.382497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.382718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.382752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.382873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.382907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.383075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.383108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 
00:27:27.609 [2024-12-11 15:08:20.383249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.383283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.383394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.383427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.383596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.383629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.383805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.383839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.384009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.384043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.384226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.384260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.384445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.384483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.384661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.384695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.384864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.384897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.385103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.385136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 
00:27:27.609 [2024-12-11 15:08:20.385275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.385310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.385429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.385462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.385573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.385606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.385719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.385753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.385859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.385892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.386016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.386049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.386177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.386212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.386387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.386421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.386591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.386631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.386811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.386844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 
00:27:27.609 [2024-12-11 15:08:20.386946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.386980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.387083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.387115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.387294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.387328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.390376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.390413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.390537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.390568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.390685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.390716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.390893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.390927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.391120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.391153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.391335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.391369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.391537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.391571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 
00:27:27.609 [2024-12-11 15:08:20.391756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.391790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.391908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.391941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.392114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.392147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.392344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.392377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.392553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.392586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.609 [2024-12-11 15:08:20.392781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.609 [2024-12-11 15:08:20.392814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.609 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.392914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.392948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.393143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.393189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.393292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.393325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.393438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.393472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 
00:27:27.610 [2024-12-11 15:08:20.393573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.393606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.393722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.393755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.393873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.393907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.394010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.394043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.394215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.394250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.394373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.394417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.394531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.394565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.394760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.394793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.394966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.394998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.395108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.395141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 
00:27:27.610 [2024-12-11 15:08:20.395255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.395288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.395396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.395429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.395528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.395562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.395756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.395789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.395904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.395937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.396130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.396176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.396345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.396378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.396552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.396585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.396760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.396794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.396970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.397004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 
00:27:27.610 [2024-12-11 15:08:20.397192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.397228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.397410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.397443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.397616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.397650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.397822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.397856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.398027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.398061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.398179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.398212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.398321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.398355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.398547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.398580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.398747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.398780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.399046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.399080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 
00:27:27.610 [2024-12-11 15:08:20.399253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.399289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.399396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.399429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.399636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.399676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.399777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.399809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.399915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.399948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.610 qpair failed and we were unable to recover it. 00:27:27.610 [2024-12-11 15:08:20.400116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.610 [2024-12-11 15:08:20.400150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.400354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.400388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.400493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.400527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.400792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.400825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.400993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.401027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 
00:27:27.611 [2024-12-11 15:08:20.401198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.401232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.401421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.401454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.401627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.401660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.401777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.401810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.401926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.401959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.402061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.402094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.402222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.402257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.402385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.402418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.402535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.402568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.402739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.402772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 
00:27:27.611 [2024-12-11 15:08:20.402897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.402930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.403192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.403226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.403333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.403367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.403560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.403593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.403710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.403744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.403859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.403893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.404066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.404100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.404374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.404409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.404524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.404557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.404749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.404785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 
00:27:27.611 [2024-12-11 15:08:20.404987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.405020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.405136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.405179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.405352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.405385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.405555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.405589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.405727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.405761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.405945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.405978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.406170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.406205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.406312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.406345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.406536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.406569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.406789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.406823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 
00:27:27.611 [2024-12-11 15:08:20.406927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.406959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.407076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.407109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.407268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.407302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.407463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.407537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.407773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.611 [2024-12-11 15:08:20.407843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.611 qpair failed and we were unable to recover it. 00:27:27.611 [2024-12-11 15:08:20.408055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.408091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.408214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.408250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.408425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.408460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.408647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.408680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.408855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.408887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 
00:27:27.612 [2024-12-11 15:08:20.409000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.409033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.409202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.409238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.409371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.409404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.409604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.409636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.409806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.409839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.409956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.409988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.410117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.410169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.410349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.410382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.410555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.410589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.410755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.410787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 
00:27:27.612 [2024-12-11 15:08:20.410903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.410936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.411039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.411073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.411264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.411298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.411485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.411517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.411655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.411689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.411860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.411893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.412012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.412045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.412217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.412251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.412359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.412393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.412602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.412635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 
00:27:27.612 [2024-12-11 15:08:20.412839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.412871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.413101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.413133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.413408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.413442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.413554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.413586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.413703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.413737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.413857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.413889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.414058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.414092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.414226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.414260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.414386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.414420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.414547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.414579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 
00:27:27.612 [2024-12-11 15:08:20.414754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.414787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.414960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.414992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.415096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.415129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.415292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.612 [2024-12-11 15:08:20.415365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.612 qpair failed and we were unable to recover it. 00:27:27.612 [2024-12-11 15:08:20.415608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.415644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.415762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.415795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.415970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.416005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.416111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.416143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.416257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.416292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.416494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.416526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 
00:27:27.613 [2024-12-11 15:08:20.416640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.416673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.416940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.416973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.417235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.417269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.417520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.417552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.417659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.417692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.417798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.417831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.417948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.417982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.418179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.418213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.418326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.418360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.418499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.418531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 
00:27:27.613 [2024-12-11 15:08:20.418702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.418734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.418942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.418975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.419092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.419125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.419261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.419295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.419474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.419507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.419623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.419656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.419761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.419795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.419899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.419931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.420098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.420131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.420252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.420287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 
00:27:27.613 [2024-12-11 15:08:20.420421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.420454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.420569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.420603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.420772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.420805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.420911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.420944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.421126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.421170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.421386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.421419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.421527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.421559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.421730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.421763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.421998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.613 [2024-12-11 15:08:20.422033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.613 qpair failed and we were unable to recover it. 00:27:27.613 [2024-12-11 15:08:20.422203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.422238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 
00:27:27.614 [2024-12-11 15:08:20.422407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.422441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.422551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.422584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.422704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.422736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.422865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.422904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.423020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.423052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.423224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.423258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.423472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.423505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.423693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.423726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.423847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.423879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.424101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.424134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 
00:27:27.614 [2024-12-11 15:08:20.424246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.424279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.424397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.424430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.424604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.424637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.424739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.424773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.424968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.425001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.425296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.425332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.425578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.425611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.425742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.425775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.425893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.425927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.426100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.426133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 
00:27:27.614 [2024-12-11 15:08:20.426313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.426347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.426519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.426553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.426730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.426763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.426936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.426969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.427095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.427130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.427272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.427306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.427417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.427451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.427618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.427650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.427825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.427857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.427960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.427994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 
00:27:27.614 [2024-12-11 15:08:20.428111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.428144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.428338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.428372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.428546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.428579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.428686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.428719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.428821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.428854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.429044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.429077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.429245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.429280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.429453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.614 [2024-12-11 15:08:20.429487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.614 qpair failed and we were unable to recover it. 00:27:27.614 [2024-12-11 15:08:20.429609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.429642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.429773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.429807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 
00:27:27.615 [2024-12-11 15:08:20.429912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.429945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.430054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.430088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.430287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.430320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.430507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.430546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.430718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.430751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.430869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.430902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.431076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.431109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.431223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.431258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.431450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.431483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.431698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.431732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 
00:27:27.615 [2024-12-11 15:08:20.431836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.431868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.431988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.432021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.432213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.432247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.432438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.432471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.432656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.432689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.432808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.432841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.432947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.432980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.433169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.433204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.433406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.433439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.433627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.433659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 
00:27:27.615 [2024-12-11 15:08:20.433782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.433814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.433938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.433971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.434099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.434131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.434250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.434283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.434467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.434500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.434617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.434650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.434825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.434858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.434968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.435001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.435104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.435138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 00:27:27.615 [2024-12-11 15:08:20.435368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.615 [2024-12-11 15:08:20.435402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.615 qpair failed and we were unable to recover it. 
00:27:27.615 [2024-12-11 15:08:20.435580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.615 [2024-12-11 15:08:20.435613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420
00:27:27.615 qpair failed and we were unable to recover it.
00:27:27.621 [2024-12-11 15:08:20.473299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.473371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.473574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.473611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.473736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.473770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.473962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.473995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.474174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.474210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.474315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.474349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.474468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.474501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.474672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.474705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.474888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.474921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.475042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.475075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 
00:27:27.621 [2024-12-11 15:08:20.475195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.475231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.475401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.475436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.475603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.475636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.475755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.475798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.475975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.476008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.476114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.476148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.476397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.476431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.476551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.476584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.476687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.476719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.476822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.476855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 
00:27:27.621 [2024-12-11 15:08:20.476980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.477013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.477183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.477218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.477386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.477419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.477588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.477621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.477737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.477770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.477896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.477929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.478106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.478140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.478359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.478393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.478515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.478549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.478670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.478703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 
00:27:27.621 [2024-12-11 15:08:20.478872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.478906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.479073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.479107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.479313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.621 [2024-12-11 15:08:20.479348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.621 qpair failed and we were unable to recover it. 00:27:27.621 [2024-12-11 15:08:20.479462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.479496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.479703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.479738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.479848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.479881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.480002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.480035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.480207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.480243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.480365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.480398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.480509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.480542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 
00:27:27.622 [2024-12-11 15:08:20.480791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.480829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.480937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.480971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.481085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.481118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.481293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.481326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.481568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.481602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.481773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.481805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.481976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.482010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.482181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.482217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.482385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.482417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.482588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.482620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 
00:27:27.622 [2024-12-11 15:08:20.482733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.482767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.482935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.482968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.483075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.483109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.483243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.483282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.483403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.483436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.483570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.483603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.483717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.483750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.483919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.483952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.484121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.484155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.484340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.484373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 
00:27:27.622 [2024-12-11 15:08:20.484488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.484521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.484642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.484675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.484777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.484811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.484909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.484943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.485112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.485145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.485259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.485293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.485483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.485518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.485694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.485727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.485897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.485930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.486041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.486073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 
00:27:27.622 [2024-12-11 15:08:20.486337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.486371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.486480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.486513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.622 qpair failed and we were unable to recover it. 00:27:27.622 [2024-12-11 15:08:20.486617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.622 [2024-12-11 15:08:20.486650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.486847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.486881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.486996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.487029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.487225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.487259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.487448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.487481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.487613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.487645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.487816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.487850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.488069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.488101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 
00:27:27.623 [2024-12-11 15:08:20.488301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.488340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.488459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.488493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.488700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.488734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.488904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.488937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.489123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.489166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.489339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.489373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.489565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.489599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.489724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.489756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.490001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.490035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.490227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.490262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 
00:27:27.623 [2024-12-11 15:08:20.490406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.490439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.490558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.490592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.490767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.490800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.490909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.490949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.491054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.491087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.491294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.491329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.491432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.491466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.491674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.491707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.491821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.491855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.492026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.492059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 
00:27:27.623 [2024-12-11 15:08:20.492177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.492212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.492353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.492387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.492568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.492601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.492706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.492739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.492848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.492882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.493005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.493039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.493259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.493295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.493494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.493527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.493653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.493687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.493789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.493823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 
00:27:27.623 [2024-12-11 15:08:20.493932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.493965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.623 qpair failed and we were unable to recover it. 00:27:27.623 [2024-12-11 15:08:20.494177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.623 [2024-12-11 15:08:20.494212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.494406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.494440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.494555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.494589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.494688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.494721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.494823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.494857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.494979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.495013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.495221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.495256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.495375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.495408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.495511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.495545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 
00:27:27.624 [2024-12-11 15:08:20.495741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.495778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.495904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.495937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.496044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.496078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.496212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.496247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.496362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.496395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.496582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.496614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.496880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.496914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.497022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.497054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.497239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.497273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.497463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.497497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 
00:27:27.624 [2024-12-11 15:08:20.497666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.497699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.497903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.497936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.498117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.498150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.498266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.498304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.498405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.498437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.498671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.498705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.498824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.498856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.499032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.499065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.499192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.499227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.499397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.499429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 
00:27:27.624 [2024-12-11 15:08:20.499602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.499635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.499854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.499887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.500080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.500113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.500251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.500284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.500387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.500419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.500587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.500621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.500745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.500777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.500966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.500999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.501126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.501166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 00:27:27.624 [2024-12-11 15:08:20.501294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.501328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.624 qpair failed and we were unable to recover it. 
00:27:27.624 [2024-12-11 15:08:20.501528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.624 [2024-12-11 15:08:20.501561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.501662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.501695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.501864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.501897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.502009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.502043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.502152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.502192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.502380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.502414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.502624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.502656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.502769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.502802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.502921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.502954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.503080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.503112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 
00:27:27.625 [2024-12-11 15:08:20.503315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.503353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.503470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.503504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.503705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.503739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.503906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.503939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.504055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.504088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.504299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.504334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.504556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.504590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.504704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.504737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.504856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.504889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.505032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.505065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 
00:27:27.625 [2024-12-11 15:08:20.505255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.505289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.505464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.505497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.505620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.505652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.505770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.505816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.505988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.506021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.506279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.506314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.506426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.506460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.506629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.506662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.506850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.506884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.507057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.507091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 
00:27:27.625 [2024-12-11 15:08:20.507207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.507243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.507451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.507484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.507669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.507702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.507885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.507918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.508028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.508062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.508193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.508229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.508344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.508378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.508554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.508587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.508705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.508739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 00:27:27.625 [2024-12-11 15:08:20.508858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.625 [2024-12-11 15:08:20.508890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.625 qpair failed and we were unable to recover it. 
00:27:27.626 [2024-12-11 15:08:20.509003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.509036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.509208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.509243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.509353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.509387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.509558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.509592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.509702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.509735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.509904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.509938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.510108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.510142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.510271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.510304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.510473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.510506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.510623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.510657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 
00:27:27.626 [2024-12-11 15:08:20.510777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.510812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.511008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.511041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.511210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.511245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.511353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.511387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.511592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.511626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.511833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.511866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.511984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.512017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.512138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.512183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.512352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.512386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.512490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.512524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 
00:27:27.626 [2024-12-11 15:08:20.512692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.512726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.512908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.512942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.513112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.513145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.513324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.513364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.513487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.513520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.513727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.513761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.513882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.513915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.514021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.514055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.514181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.514216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.514476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.514510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 
00:27:27.626 [2024-12-11 15:08:20.514708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.514741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.514876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.514909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.515083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.515117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.515293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.515327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.626 [2024-12-11 15:08:20.515442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.626 [2024-12-11 15:08:20.515475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.626 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.515578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.515612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.515787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.515820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.515942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.515975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.516089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.516123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.516255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.516290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 
00:27:27.627 [2024-12-11 15:08:20.516407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.516439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.516552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.516585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.516704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.516737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.517002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.517035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.517214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.517248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.517366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.517399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.517514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.517546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.517657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.517689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.517812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.517847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.517958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.517991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 
00:27:27.627 [2024-12-11 15:08:20.518195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.518236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.518408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.518441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.518616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.518649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.518752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.518787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.518900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.518936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.519103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.519133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.519340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.519375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.519569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.519602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.519720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.519754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.519928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.519962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 
00:27:27.627 [2024-12-11 15:08:20.520134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.520176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.520299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.520332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.520459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.520493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.520694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.520727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.520834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.520868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.521034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.521068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.521182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.521215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.521455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.521488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.521658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.521691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.521808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.521841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 
00:27:27.627 [2024-12-11 15:08:20.522026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.522059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.522210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.522245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.522417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.522450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.627 [2024-12-11 15:08:20.522642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.627 [2024-12-11 15:08:20.522675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.627 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.522787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.522819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.522951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.522984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.523095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.523127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.523327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.523360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.523529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.523563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.523731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.523765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 
00:27:27.628 [2024-12-11 15:08:20.523957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.523991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.524168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.524202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.524457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.524489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.524672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.524705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.524812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.524844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.524955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.524985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.525155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.525196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.525309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.525342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.525509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.525543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.525716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.525750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 
00:27:27.628 [2024-12-11 15:08:20.525881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.525919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.526126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.526184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.526306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.526339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.526445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.526477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.526599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.526633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.526749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.526781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.526949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.526982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.527155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.527200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.527370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.527403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.527569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.527602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 
00:27:27.628 [2024-12-11 15:08:20.527707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.527741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.527918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.527953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.528154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.528209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.528316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.528348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.528466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.528500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.528743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.528776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.528944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.528977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.529084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.529116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.529295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.529329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.529508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.529541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 
00:27:27.628 [2024-12-11 15:08:20.529708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.529740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.529860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.529894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.530017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.628 [2024-12-11 15:08:20.530050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.628 qpair failed and we were unable to recover it. 00:27:27.628 [2024-12-11 15:08:20.530221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.530255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.530368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.530400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.530610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.530644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.530839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.530873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.531000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.531034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.531291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.531325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.531494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.531527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 
00:27:27.629 [2024-12-11 15:08:20.531693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.531725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.531836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.531869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.532109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.532142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.532255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.532287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.532517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.532549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.532671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.532703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.532907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.532941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.533124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.533156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.533277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.533309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.533417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.533449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 
00:27:27.629 [2024-12-11 15:08:20.533641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.533679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.533899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.533932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.534103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.534135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.534361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.534394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.534522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.534555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.534794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.534827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.534946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.534980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.535090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.535122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.535303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.535336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.535503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.535536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 
00:27:27.629 [2024-12-11 15:08:20.535703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.535736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.535917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.535950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.536061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.536094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.536331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.536365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.536644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.536677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.536857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.536890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.537057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.537091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.537301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.537336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.537457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.537489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.537661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.537693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 
00:27:27.629 [2024-12-11 15:08:20.537823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.537857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.538048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.629 [2024-12-11 15:08:20.538082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.629 qpair failed and we were unable to recover it. 00:27:27.629 [2024-12-11 15:08:20.538291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.538326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.538431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.538461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.538574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.538608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.538777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.538809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.538993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.539025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.539202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.539236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.539412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.539446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.539632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.539665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 
00:27:27.630 [2024-12-11 15:08:20.539930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.539963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.540069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.540101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.540282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.540314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.540486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.540520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.540694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.540727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.540905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.540937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.541118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.541150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.541272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.541303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.541411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.541442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.541638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.541672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 
00:27:27.630 [2024-12-11 15:08:20.541839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.541877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.541998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.542032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.542224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.542259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.542455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.542487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.542751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.542784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.542958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.542989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.543091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.543123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.543236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.543269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.543389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.543420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.543531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.543565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 
00:27:27.630 [2024-12-11 15:08:20.543676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.543707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.543838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.543872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.544042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.544074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.544184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.544217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.544395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.544444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.544576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.544608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.544743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.630 [2024-12-11 15:08:20.544776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.630 qpair failed and we were unable to recover it. 00:27:27.630 [2024-12-11 15:08:20.544974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.545005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.545140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.545182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.545353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.545386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 
00:27:27.631 [2024-12-11 15:08:20.545489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.545520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.545622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.545656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.545845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.545879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.545990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.546022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.546134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.546188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.546302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.546335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.546452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.546485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.546598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.546632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.546870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.546904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.547021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.547054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 
00:27:27.631 [2024-12-11 15:08:20.547172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.547207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.547380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.547412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.547524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.547556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.547734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.547767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.547884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.547918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.548105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.548137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.548359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.548391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.548499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.548530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.548629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.548661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.548830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.548862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 
00:27:27.631 [2024-12-11 15:08:20.548985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.549025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.549280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.549315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.549420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.549452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.549686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.549720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.549892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.549925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.550094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.550125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.550288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.550359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.550565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.550602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.550719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.550753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.550930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.550963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 
00:27:27.631 [2024-12-11 15:08:20.551145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.551193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.551327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.551360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.551543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.551576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.551742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.551775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.551958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.551991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.552176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.631 [2024-12-11 15:08:20.552211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.631 qpair failed and we were unable to recover it. 00:27:27.631 [2024-12-11 15:08:20.552396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.552429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.552599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.552632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.552895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.552928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.553104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.553136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 
00:27:27.632 [2024-12-11 15:08:20.553384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.553418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.553544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.553576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.553750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.553784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.553893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.553925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.554102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.554136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.554318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.554352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.554523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.554556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.554669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.554703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.554871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.554904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.555071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.555103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 
00:27:27.632 [2024-12-11 15:08:20.555303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.555338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.555550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.555582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.555772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.555804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.555922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.555956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.556074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.556106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.556235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.556269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.556459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.556491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.556610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.556643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.556829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.556862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.557025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.557059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 
00:27:27.632 [2024-12-11 15:08:20.557180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.557220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.557495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.557528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.557630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.557662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.557798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.557831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.557954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.557986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.558155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.558207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.558376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.558408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.558617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.558649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.558822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.558854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.558977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.559010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 
00:27:27.632 [2024-12-11 15:08:20.559122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.559154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.559288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.559321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.559425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.559457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.559718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.559750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.632 qpair failed and we were unable to recover it. 00:27:27.632 [2024-12-11 15:08:20.559929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.632 [2024-12-11 15:08:20.559962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.560135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.560177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.560346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.560378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.560494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.560527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.560647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.560680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.560791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.560824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 
00:27:27.633 [2024-12-11 15:08:20.561015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.561047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.561178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.561214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.561337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.561369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.561539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.561572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.561688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.561721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.561832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.561865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.562032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.562064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.562245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.562280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.562519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.562552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.562760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.562793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 
00:27:27.633 [2024-12-11 15:08:20.562964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.562997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.563190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.563225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.563401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.563433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.563550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.563583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.563766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.563799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.563918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.563951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.564138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.564179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.564348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.564383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.564575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.564607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 00:27:27.633 [2024-12-11 15:08:20.564772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.633 [2024-12-11 15:08:20.564806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.633 qpair failed and we were unable to recover it. 
00:27:27.633 [2024-12-11 15:08:20.564975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.633 [2024-12-11 15:08:20.565014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420
00:27:27.633 qpair failed and we were unable to recover it.
00:27:27.634 [2024-12-11 15:08:20.570299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14dab20 (9): Bad file descriptor
00:27:27.634 [2024-12-11 15:08:20.570670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:27.634 [2024-12-11 15:08:20.570740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420
00:27:27.634 qpair failed and we were unable to recover it.
00:27:27.639 [2024-12-11 15:08:20.606790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.606822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.606933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.606965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.607133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.607176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.607374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.607406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.607577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.607617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.607719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.607752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.608018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.608050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.608152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.608195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.608388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.608421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.608589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.608621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 
00:27:27.639 [2024-12-11 15:08:20.608788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.608821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.609002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.609035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.609229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.609264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.609467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.609502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.609615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.609648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.609825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.609858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.610047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.610080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.610249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.610283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.610461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.610494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.610625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.610657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 
00:27:27.639 [2024-12-11 15:08:20.610759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.610802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.610928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.610974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.611171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.611208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.611375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.611408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.611526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.611560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.611662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.611695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.611884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.611923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.612047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.612081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.612254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.612289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.612420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.612454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 
00:27:27.639 [2024-12-11 15:08:20.612566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.612600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.612714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.612749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.612867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.612901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.613034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.613074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.613345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.639 [2024-12-11 15:08:20.613386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.639 qpair failed and we were unable to recover it. 00:27:27.639 [2024-12-11 15:08:20.613564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-12-11 15:08:20.613597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-12-11 15:08:20.613718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-12-11 15:08:20.613752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-12-11 15:08:20.613883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-12-11 15:08:20.613918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-12-11 15:08:20.614022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-12-11 15:08:20.614056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-12-11 15:08:20.614210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-12-11 15:08:20.614246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 
00:27:27.640 [2024-12-11 15:08:20.614365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-12-11 15:08:20.614398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-12-11 15:08:20.614579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-12-11 15:08:20.614613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-12-11 15:08:20.614794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-12-11 15:08:20.614828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-12-11 15:08:20.615002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-12-11 15:08:20.615035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-12-11 15:08:20.615146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-12-11 15:08:20.615199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.640 [2024-12-11 15:08:20.615392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.640 [2024-12-11 15:08:20.615426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.640 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.615593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.615627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.615835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.615869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.616124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.616166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.616361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.616395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 
00:27:27.935 [2024-12-11 15:08:20.616508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.616542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.616719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.616753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.616949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.616983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.617094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.617128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.617337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.617372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.617485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.617517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.617685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.617719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.617840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.617875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.617999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.618032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.618192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.618228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 
00:27:27.935 [2024-12-11 15:08:20.618397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.618432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.618548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.618584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.618721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.618754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.618862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.618896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.619009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.619044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.619224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.619260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.619377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.619412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.619651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.619684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.619802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.619835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.619946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.619979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 
00:27:27.935 [2024-12-11 15:08:20.620084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.935 [2024-12-11 15:08:20.620117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.935 qpair failed and we were unable to recover it. 00:27:27.935 [2024-12-11 15:08:20.620242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.620278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.620389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.620422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.620608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.620641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.620810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.620845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.621014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.621048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.621169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.621203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.621387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.621421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.621547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.621581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.621752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.621785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 
00:27:27.936 [2024-12-11 15:08:20.621958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.621992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.622174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.622209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.622327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.622361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.622474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.622507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.622674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.622714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.622884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.622918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.623117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.623151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.623276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.623311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.623428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.623463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.623631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.623665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 
00:27:27.936 [2024-12-11 15:08:20.623775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.623809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.623924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.623958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.624063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.624098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.624359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.624395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.624521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.624555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.624756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.624789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.624987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.625022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.625142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.625186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.625324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.625358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.625482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.625515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 
00:27:27.936 [2024-12-11 15:08:20.625631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.625665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.625931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.625965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.626083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.626117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.626256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.626292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.626459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.626492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.626729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.626763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.626878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.626912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.627034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.627069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.627241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.627276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 00:27:27.936 [2024-12-11 15:08:20.627449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.936 [2024-12-11 15:08:20.627483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.936 qpair failed and we were unable to recover it. 
00:27:27.936 [2024-12-11 15:08:20.627584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.627618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.627836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.627910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.628045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.628083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.628261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.628297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.630218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.630275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.630490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.630526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.630733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.630767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.630895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.630928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.631064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.631098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.631252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.631289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 
00:27:27.937 [2024-12-11 15:08:20.631396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.631429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.631552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.631593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.631771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.631805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.631971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.632005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.632188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.632223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.632403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.632437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.632606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.632639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.632748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.632782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.632888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.632922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.633092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.633125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 
00:27:27.937 [2024-12-11 15:08:20.633240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.633274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.633454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.633488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.633669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.633702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.633817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.633851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.633964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.633998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.634101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.634135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.634352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.634387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.634506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.634539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.634663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.634702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.634874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.634907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 
00:27:27.937 [2024-12-11 15:08:20.635105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.635139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.635369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.635404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.635531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.635565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.635745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.635778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.635949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.635983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.636177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.636212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.636467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.636501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.636684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.636717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.636824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.636859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 00:27:27.937 [2024-12-11 15:08:20.636978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.937 [2024-12-11 15:08:20.637011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.937 qpair failed and we were unable to recover it. 
00:27:27.938 [2024-12-11 15:08:20.637128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.637173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.637277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.637312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.637488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.637521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.639564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.639625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.639934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.639971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.640176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.640213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.640401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.640434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.640606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.640638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.640811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.640844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.640961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.640994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 
00:27:27.938 [2024-12-11 15:08:20.641189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.641224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.641399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.641432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.641570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.641603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.641778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.641808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.641937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.641971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.642194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.642235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.642354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.642386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.642494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.642525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.642692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.642725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.642849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.642883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 
00:27:27.938 [2024-12-11 15:08:20.643056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.643088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.643202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.643233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.643333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.643362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.643464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.643495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.643606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.643637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.643759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.643793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.643962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.643996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.644237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.644273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.644404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.644438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.644561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.644593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 
00:27:27.938 [2024-12-11 15:08:20.644718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.644751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.644870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.644916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.645017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.645048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.645212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.645245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.645364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.645396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.645529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.645563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.645792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.645826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.646018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.646048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.646153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.938 [2024-12-11 15:08:20.646192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.938 qpair failed and we were unable to recover it. 00:27:27.938 [2024-12-11 15:08:20.646310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.646344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 
00:27:27.939 [2024-12-11 15:08:20.646527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.646561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.646687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.646721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.646837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.646876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.646985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.647020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.647259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.647333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.647468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.647505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.647680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.647714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.647828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.647861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.647977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.648010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.649461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.649513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 
00:27:27.939 [2024-12-11 15:08:20.649752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.649786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.649960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.649992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.650112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.650143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.650428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.650460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.650633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.650666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.650877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.650909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.651118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.651152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.651287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.651319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.651580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.651612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.651788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.651821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 
00:27:27.939 [2024-12-11 15:08:20.651944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.651976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.652104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.652136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.652272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.652306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.652426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.652456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.652625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.652660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.652831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.652863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.653039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.653071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.653231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.653265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.653381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.653414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.653645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.653718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 
00:27:27.939 [2024-12-11 15:08:20.653846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.653884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.939 [2024-12-11 15:08:20.654097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.939 [2024-12-11 15:08:20.654131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.939 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.654342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.654376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.654583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.654616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.654802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.654834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.655014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.655047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.655149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.655197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.655325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.655357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.655555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.655587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.655760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.655793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 
00:27:27.940 [2024-12-11 15:08:20.656045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.656077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.656203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.656238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.656355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.656398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.656570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.656602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.656795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.656827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.657003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.657038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.657146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.657188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.657309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.657342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.657456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.657490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.657683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.657717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 
00:27:27.940 [2024-12-11 15:08:20.657835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.657869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.657980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.658013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.658193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.658227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.658341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.658375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.658543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.658576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.660014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.660066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.660373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.660411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.660529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.660563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.660748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.660781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.660898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.660929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 
00:27:27.940 [2024-12-11 15:08:20.661103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.661135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.661390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.661423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.661534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.661567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.661690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.661722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.661935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.661968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.662077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.662112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.662231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.662265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.662373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.662406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.662617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.662651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.662764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.662804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 
00:27:27.940 [2024-12-11 15:08:20.662918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.940 [2024-12-11 15:08:20.662951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.940 qpair failed and we were unable to recover it. 00:27:27.940 [2024-12-11 15:08:20.663153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.663198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.663299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.663332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.663447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.663481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.663684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.663718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.663837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.663870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.664043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.664076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.664196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.664232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.664398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.664431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.664625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.664659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 
00:27:27.941 [2024-12-11 15:08:20.664775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.664808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.664920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.664953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.665068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.665102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.665261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.665295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.665471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.665504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.665684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.665718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.665898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.665930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.666131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.666174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.666284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.666318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.666487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.666521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 
00:27:27.941 [2024-12-11 15:08:20.666692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.666726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.666904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.666937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.667181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.667218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.667395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.667429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.667600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.667633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.667876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.667909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.668030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.668064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.668180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.668215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.668335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.668368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.668479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.668513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 
00:27:27.941 [2024-12-11 15:08:20.668617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.668649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.668756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.668791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.669043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.669076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.669257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.669291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.669463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.669497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.669734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.669767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.669884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.669917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.670061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.670095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.670200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.670236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.941 qpair failed and we were unable to recover it. 00:27:27.941 [2024-12-11 15:08:20.670354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.941 [2024-12-11 15:08:20.670394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.942 qpair failed and we were unable to recover it. 
00:27:27.942 [2024-12-11 15:08:20.670575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.942 [2024-12-11 15:08:20.670607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.942 qpair failed and we were unable to recover it. 00:27:27.942 [2024-12-11 15:08:20.670778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.942 [2024-12-11 15:08:20.670812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.942 qpair failed and we were unable to recover it. 00:27:27.942 [2024-12-11 15:08:20.670922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.942 [2024-12-11 15:08:20.670955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.942 qpair failed and we were unable to recover it. 00:27:27.942 [2024-12-11 15:08:20.671137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.942 [2024-12-11 15:08:20.671179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.942 qpair failed and we were unable to recover it. 00:27:27.942 [2024-12-11 15:08:20.671295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.942 [2024-12-11 15:08:20.671330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.942 qpair failed and we were unable to recover it. 00:27:27.942 [2024-12-11 15:08:20.671497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.942 [2024-12-11 15:08:20.671529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.942 qpair failed and we were unable to recover it. 00:27:27.942 [2024-12-11 15:08:20.671793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.942 [2024-12-11 15:08:20.671828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.942 qpair failed and we were unable to recover it. 00:27:27.942 [2024-12-11 15:08:20.672001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.942 [2024-12-11 15:08:20.672034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.942 qpair failed and we were unable to recover it. 00:27:27.942 [2024-12-11 15:08:20.672171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.942 [2024-12-11 15:08:20.672205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.942 qpair failed and we were unable to recover it. 00:27:27.942 [2024-12-11 15:08:20.672330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.942 [2024-12-11 15:08:20.672364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.942 qpair failed and we were unable to recover it. 
00:27:27.944 [2024-12-11 15:08:20.689629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-11 15:08:20.689661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-11 15:08:20.689764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-11 15:08:20.689798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.944 [2024-12-11 15:08:20.689907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.944 [2024-12-11 15:08:20.689936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.944 qpair failed and we were unable to recover it. 00:27:27.945 [2024-12-11 15:08:20.690032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.945 [2024-12-11 15:08:20.690063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.945 qpair failed and we were unable to recover it. 00:27:27.945 [2024-12-11 15:08:20.690322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.945 [2024-12-11 15:08:20.690354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.945 qpair failed and we were unable to recover it. 00:27:27.945 [2024-12-11 15:08:20.690518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.945 [2024-12-11 15:08:20.690548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.945 qpair failed and we were unable to recover it. 00:27:27.945 [2024-12-11 15:08:20.690712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.945 [2024-12-11 15:08:20.690782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.945 qpair failed and we were unable to recover it. 00:27:27.945 [2024-12-11 15:08:20.690920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.945 [2024-12-11 15:08:20.690957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.945 qpair failed and we were unable to recover it. 00:27:27.945 [2024-12-11 15:08:20.691176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.945 [2024-12-11 15:08:20.691213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.945 qpair failed and we were unable to recover it. 00:27:27.945 [2024-12-11 15:08:20.691412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.945 [2024-12-11 15:08:20.691455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.945 qpair failed and we were unable to recover it. 
00:27:27.946 [2024-12-11 15:08:20.698649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-11 15:08:20.698708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-11 15:08:20.698891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-11 15:08:20.698925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-11 15:08:20.699027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-11 15:08:20.699059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-11 15:08:20.699180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-11 15:08:20.699214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-11 15:08:20.699394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-11 15:08:20.699426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-11 15:08:20.699619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-11 15:08:20.699652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-11 15:08:20.699754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-11 15:08:20.699786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-11 15:08:20.699908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-11 15:08:20.699941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-11 15:08:20.700051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-11 15:08:20.700083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 00:27:27.946 [2024-12-11 15:08:20.700216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.946 [2024-12-11 15:08:20.700253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.946 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-12-11 15:08:20.705615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.705686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.705823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.705859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.706038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.706071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.706251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.706287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.706403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.706437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.706540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.706573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.706771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.706805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.706907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.706940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.707060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.707092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.707219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.707254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-12-11 15:08:20.707440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.707473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.707599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.707632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.707732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.707765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.707869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.707910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.708028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.708061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.708187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.708222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.708435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.708467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.708634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.708667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.708800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.708834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.708949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.708982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-12-11 15:08:20.709098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.709131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.709251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.709286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.709405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.709439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.709620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.709653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.709762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.709804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.709933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.709976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.710202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.710241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.710431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.710466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.710570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.710603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.710722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.710755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 
00:27:27.947 [2024-12-11 15:08:20.710860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.947 [2024-12-11 15:08:20.710893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.947 qpair failed and we were unable to recover it. 00:27:27.947 [2024-12-11 15:08:20.711004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.711037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.711207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.711241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.711421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.711455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.711572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.711606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.711806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.711840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.711956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.711990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.712181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.712234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.712358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.712391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.712507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.712541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 
00:27:27.948 [2024-12-11 15:08:20.712677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.712716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.712833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.712866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.713061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.713094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.713198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.713232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.713426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.713460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.713564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.713599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.713709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.713743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.713923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.713957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.714074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.714108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.714300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.714336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 
00:27:27.948 [2024-12-11 15:08:20.714452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.714487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.714606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.714641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.714813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.714847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.714961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.714994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.715200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.715236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.715354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.715388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.715502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.715535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.715725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.715759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.715926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.715960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.716128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.716170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 
00:27:27.948 [2024-12-11 15:08:20.716278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.716311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.716582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.716616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.716724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.716758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.716882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.716916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.717035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.717069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.717241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.717276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.948 qpair failed and we were unable to recover it. 00:27:27.948 [2024-12-11 15:08:20.717449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.948 [2024-12-11 15:08:20.717483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.717655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.717695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.717869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.717904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.718076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.718110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-12-11 15:08:20.718228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.718264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.718378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.718412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.718629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.718663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.718838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.718872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.719056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.719090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.719207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.719242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.719444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.719479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.719594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.719628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.719753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.719787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.719916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.719950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-12-11 15:08:20.720065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.720100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.720296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.720330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.720499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.720533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.720700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.720734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.720927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.720960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.721177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.721213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.721387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.721420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.721613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.721646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.721841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.721876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.721989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.722022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-12-11 15:08:20.722236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.722270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.722392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.722424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.722542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.722575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.722691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.722724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.722926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.722960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.723087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.723122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.723320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.723355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.723461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.723494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.723730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.723762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.723933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.723967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 
00:27:27.949 [2024-12-11 15:08:20.724073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.724107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.724224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.724259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.724447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.724481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.724658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.724691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.724802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.724836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.724949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.949 [2024-12-11 15:08:20.724994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.949 qpair failed and we were unable to recover it. 00:27:27.949 [2024-12-11 15:08:20.725181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.725217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.725388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.725421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.725596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.725629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.725840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.725874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 
00:27:27.950 [2024-12-11 15:08:20.726006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.726039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.726145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.726191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.726388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.726422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.726663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.726696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.726815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.726848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.727016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.727050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.727169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.727204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.727318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.727352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.727485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.727519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.727690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.727724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 
00:27:27.950 [2024-12-11 15:08:20.727832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.727866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.728036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.728070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.728385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.728423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.728667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.728700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.728815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.728849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.728964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.728996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.729195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.729230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.729422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.729456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.729572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.729604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.729741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.729775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 
00:27:27.950 [2024-12-11 15:08:20.729943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.729976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.730142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.730186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.730427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.730460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.730576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.730609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.730866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.730900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.731102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.731142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.731367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.731401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.731571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.731605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.731792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.731825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.731940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.731973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 
00:27:27.950 [2024-12-11 15:08:20.732076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.732109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.732300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.732334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.732510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.732543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.732712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.732745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.732933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.732965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.950 [2024-12-11 15:08:20.733085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.950 [2024-12-11 15:08:20.733119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.950 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.733260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.733294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.733463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.733497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.733615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.733650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.733774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.733808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 
00:27:27.951 [2024-12-11 15:08:20.734009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.734043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.734210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.734244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.734426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.734459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.734560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.734592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.734699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.734733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.734900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.734933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.735034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.735066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.735245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.735279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.735384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.735418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.735664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.735698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 
00:27:27.951 [2024-12-11 15:08:20.735813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.735847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.735967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.736000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.736106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.736145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.736324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.736357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.736525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.736557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.736680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.736714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.736842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.736876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.737041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.737074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.737249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.737283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.737494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.737528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 
00:27:27.951 [2024-12-11 15:08:20.737695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.737728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.737912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.737945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.738196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.738230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.738472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.738504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.738680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.738713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.738835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.738868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.738987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.739020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.739210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.739244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.739363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.739395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.739498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.739530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 
00:27:27.951 [2024-12-11 15:08:20.739711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.739744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.739863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.739896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.740014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.740047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.740232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.740268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.740459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.740492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.951 qpair failed and we were unable to recover it. 00:27:27.951 [2024-12-11 15:08:20.740666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.951 [2024-12-11 15:08:20.740699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.740815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.740848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.741017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.741051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.741231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.741266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.741453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.741491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-12-11 15:08:20.741679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.741713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.742007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.742040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.742169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.742203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.742375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.742407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.742595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.742628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.742806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.742839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.743102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.743136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.743353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.743388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.743518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.743551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.743661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.743695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-12-11 15:08:20.743799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.743833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.744014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.744046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.744235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.744271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.744501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.744573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.744710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.744747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.744855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.744890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.745080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.745114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.745339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.745375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.745488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.745522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.745733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.745765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-12-11 15:08:20.745949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.745983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.746152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.746201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.746374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.746406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.746593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.746626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.746740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.746772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.746894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.746926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.747138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.747193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.747306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.747340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.747439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.747472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.747641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.747673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 
00:27:27.952 [2024-12-11 15:08:20.747839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.747873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.747985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.748018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.748224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.748258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.748374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.748408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.952 [2024-12-11 15:08:20.748627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.952 [2024-12-11 15:08:20.748661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.952 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.748828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.748861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.749033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.749067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.749258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.749291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.749466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.749500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.749671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.749704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 
00:27:27.953 [2024-12-11 15:08:20.749885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.749918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.750099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.750131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.750254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.750289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.750457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.750491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.750659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.750692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.750806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.750838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.751009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.751043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.751251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.751284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.751400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.751432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.751552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.751585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 
00:27:27.953 [2024-12-11 15:08:20.751701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.751734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.751994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.752026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.752153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.752199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.752320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.752352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.752526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.752559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.752797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.752830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.752948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.752982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.753180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.753215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.753331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.753365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.753494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.753527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 
00:27:27.953 [2024-12-11 15:08:20.753698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.753731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.753932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.753965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.754079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.754113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.754230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.754263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.754429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.754462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.754593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.754627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.754800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.754839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.755009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.755041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.755208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.953 [2024-12-11 15:08:20.755243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.953 qpair failed and we were unable to recover it. 00:27:27.953 [2024-12-11 15:08:20.755454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.755486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 
00:27:27.954 [2024-12-11 15:08:20.755588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.755622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.755870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.755902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.756176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.756210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.756318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.756348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.756456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.756489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.756595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.756629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.756869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.756902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.757107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.757140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.757285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.757319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.757487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.757520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 
00:27:27.954 [2024-12-11 15:08:20.757711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.757744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.757856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.757888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.758105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.758137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.758269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.758302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.758418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.758450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.758624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.758656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.758782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.758817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.758937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.758969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.759148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.759193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.759363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.759397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 
00:27:27.954 [2024-12-11 15:08:20.759566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.759599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.759771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.759804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.759976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.760008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.760184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.760218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.760326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.760359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.760485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.760518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.760688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.760720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.760891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.760923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.761029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.761062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.761241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.761276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 
00:27:27.954 [2024-12-11 15:08:20.761383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.761415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.761543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.761575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.761745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.761778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.761883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.761915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.762095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.762127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.762242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.762275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.762456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.762494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.954 [2024-12-11 15:08:20.762682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.954 [2024-12-11 15:08:20.762714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.954 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.762902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.762935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.763125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.763167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 
00:27:27.955 [2024-12-11 15:08:20.763340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.763374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.763540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.763573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.763779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.763812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.763980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.764012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.764182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.764216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.764387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.764420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.764662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.764695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.764864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.764896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.765078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.765112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.765260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.765295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 
00:27:27.955 [2024-12-11 15:08:20.765474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.765507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.765689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.765722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.765831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.765865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.765985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.766019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.766191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.766225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.766400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.766433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.766550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.766584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.766766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.766799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.766967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.766999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.767335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.767369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 
00:27:27.955 [2024-12-11 15:08:20.767537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.767571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.767677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.767709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.767824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.767857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.768080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.768155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.768320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.768356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.768534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.768569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.768689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.768722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.768825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.768856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.769051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.769085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.769278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.769313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 
00:27:27.955 [2024-12-11 15:08:20.769450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.769483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.769592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.769625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.769734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.769767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.769882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.769916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.770035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.770068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.770239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.770274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.955 [2024-12-11 15:08:20.770465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.955 [2024-12-11 15:08:20.770498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.955 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.770623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.770656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.770832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.770865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.770970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.771004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 
00:27:27.956 [2024-12-11 15:08:20.771117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.771150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.771330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.771363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.771477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.771510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.771630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.771664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.771774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.771807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.771980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.772014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.772182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.772216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.772386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.772420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.772587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.772620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.772731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.772765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 
00:27:27.956 [2024-12-11 15:08:20.772950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.772989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.773106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.773140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.773291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.773326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.773497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.773530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.773635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.773669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.773854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.773888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.774081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.774115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.774298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.774333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.774525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.774558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.774796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.774831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 
00:27:27.956 [2024-12-11 15:08:20.774941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.774974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.775097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.775131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.775265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.775298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.775468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.775501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.775683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.775716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.775824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.775858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.776050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.776083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.776275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.776310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.776523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.776556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.776680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.776714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 
00:27:27.956 [2024-12-11 15:08:20.776902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.776936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.777181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.777215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.777408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.777442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.777550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.777583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.777761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.777796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.777908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.956 [2024-12-11 15:08:20.777942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.956 qpair failed and we were unable to recover it. 00:27:27.956 [2024-12-11 15:08:20.778072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.778105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.778232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.778272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.778445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.778478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.778644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.778676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 
00:27:27.957 [2024-12-11 15:08:20.778865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.778898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.779070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.779104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.779231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.779265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.779506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.779540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.779647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.779680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.779795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.779829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.780009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.780041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.780156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.780203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.780370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.780402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.780574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.780607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 
00:27:27.957 [2024-12-11 15:08:20.780810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.780843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.780956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.780988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.781094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.781128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.781369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.781442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.781571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.781609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.781901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.781934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.782043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.782076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.782199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.782233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.782398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.782431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.782552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.782582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 
00:27:27.957 [2024-12-11 15:08:20.782786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.782816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.783019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.783049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.783232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.783263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.783368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.783399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.783567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.783607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.783813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.783847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.783961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.783990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.784154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.784198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.784439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.784470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 00:27:27.957 [2024-12-11 15:08:20.784643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.957 [2024-12-11 15:08:20.784672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.957 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-12-11 15:08:20.784941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.784972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.785092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.785123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.785253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.785285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.785403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.785434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.785586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.785618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.785747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.785778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.785882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.785913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.786104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.786136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.786330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.786362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.786529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.786562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-12-11 15:08:20.786832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.786866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.787076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.787107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.787232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.787263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.787374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.787406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.787574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.787605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.787731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.787763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.788023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.788056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.788180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.788214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.788383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.788416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.788633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.788666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-12-11 15:08:20.788770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.788802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.789048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.789082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.789278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.789312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.789482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.789516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.789622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.789654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.789822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.789855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.790044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.790076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.790187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.790221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.790391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.790423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.790592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.790625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 
00:27:27.958 [2024-12-11 15:08:20.790808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.790842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.791009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.791042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.791257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.791290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.791544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.791577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.791680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.791718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.791889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.791922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.792181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.792216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.792389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.958 [2024-12-11 15:08:20.792422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.958 qpair failed and we were unable to recover it. 00:27:27.958 [2024-12-11 15:08:20.792533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.792566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.792687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.792720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 
00:27:27.959 [2024-12-11 15:08:20.792835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.792868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.792975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.793008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.793129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.793174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.793300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.793333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.793542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.793574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.793675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.793708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.793905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.793942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.794181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.794215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.794342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.794376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.794571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.794604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 
00:27:27.959 [2024-12-11 15:08:20.794719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.794753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.794921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.794955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.795125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.795167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.795411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.795445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.795558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.795592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.795711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.795744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.795866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.795900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.796107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.796141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.796321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.796355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.796526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.796560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 
00:27:27.959 [2024-12-11 15:08:20.796671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.796704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.796892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.796931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.797044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.797077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.797248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.797283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.797386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.797419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.797586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.797621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.797886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.797920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.798028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.798063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.798271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.798306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.798484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.798517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 
00:27:27.959 [2024-12-11 15:08:20.798729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.798762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.798966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.799000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.799193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.799227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.799398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.799431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.799602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.799636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.799762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.799796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.799984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.959 [2024-12-11 15:08:20.800017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.959 qpair failed and we were unable to recover it. 00:27:27.959 [2024-12-11 15:08:20.800130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.800173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.800293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.800325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.800564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.800597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-11 15:08:20.800793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.800827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.800997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.801030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.801203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.801237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.801518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.801552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.801723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.801756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.801872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.801905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.802092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.802125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.802360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.802430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.802617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.802661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.802840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.802873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-11 15:08:20.802988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.803023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.803192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.803226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.803393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.803426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.803591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.803624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.803761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.803793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.803963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.803996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.804177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.804212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.804382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.804415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.804530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.804563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.804746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.804783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-11 15:08:20.804895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.804926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.805041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.805071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.805209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.805242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.805421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.805453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.805620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.805653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.805782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.805815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.805980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.806012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.806116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.806148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.806273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.806306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.806476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.806507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 
00:27:27.960 [2024-12-11 15:08:20.806707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.806739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.806924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.806957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.807081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.807113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.807290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.807322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.807527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.807559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.807758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.960 [2024-12-11 15:08:20.807801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.960 qpair failed and we were unable to recover it. 00:27:27.960 [2024-12-11 15:08:20.808008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.808042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.808297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.808332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.808548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.808580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.808692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.808723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-11 15:08:20.808835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.808867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.809073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.809104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.809295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.809329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.809439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.809470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.809592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.809623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.809806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.809838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.809996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.810028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.810148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.810193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.810382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.810424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.810530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.810561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-11 15:08:20.810678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.810709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.810824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.810856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.811047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.811079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.811192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.811228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.811331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.811362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.811482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.811512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.811621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.811651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.811754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.811784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.811892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.811922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.812186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.812219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-11 15:08:20.812394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.812426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.812616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.812648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.812778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.812810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.812918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.812950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.813059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.813090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.813270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.813302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.813417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.813449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.813634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.813666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.813833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.813865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.814042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.814074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 
00:27:27.961 [2024-12-11 15:08:20.814185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.814215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.814326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.814359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.814528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.814559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.961 qpair failed and we were unable to recover it. 00:27:27.961 [2024-12-11 15:08:20.814668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.961 [2024-12-11 15:08:20.814700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.814815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.814848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.815000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.815068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.815198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.815234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.815428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.815461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.815631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.815663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.815763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.815794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 
00:27:27.962 [2024-12-11 15:08:20.815907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.815940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.816106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.816138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.816374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.816406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.816546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.816579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.816755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.816788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.816970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.817002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.817123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.817156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.817289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.817321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.817434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.817466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.817664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.817696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 
00:27:27.962 [2024-12-11 15:08:20.817825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.817857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.817992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.818025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.818209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.818242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.818347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.818379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.818546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.818577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.818743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.818777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.818955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.818987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.819155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.819199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.819306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.819337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.819450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.819483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 
00:27:27.962 [2024-12-11 15:08:20.819671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.819702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.819879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.819911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.820091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.820129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.820396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.820441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.820558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.820593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.820764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.820796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.820926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.820958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.821173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.821206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.821321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.962 [2024-12-11 15:08:20.821352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.962 qpair failed and we were unable to recover it. 00:27:27.962 [2024-12-11 15:08:20.821546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.821578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-11 15:08:20.821691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.821723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.821824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.821856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.822024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.822056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.822271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.822305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.822417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.822449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.822617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.822649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.822825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.822857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.823042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.823074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.823194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.823228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.823444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.823476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-11 15:08:20.823586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.823618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.823791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.823823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.823998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.824030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.824143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.824185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.824311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.824343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.824508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.824540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.824707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.824739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.824851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.824882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.825063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.825095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.825290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.825324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-11 15:08:20.825443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.825475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.825647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.825679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.825873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.825905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.826108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.826140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.826323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.826354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.826463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.826495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.826608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.826639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.826751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.826782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.826898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.826929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.827121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.827153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 
00:27:27.963 [2024-12-11 15:08:20.827286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.827318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.827519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.827552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.827683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.827720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.827894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.827926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.828053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.828084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.828238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.828272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.828385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.828417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.828526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.963 [2024-12-11 15:08:20.828557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.963 qpair failed and we were unable to recover it. 00:27:27.963 [2024-12-11 15:08:20.828722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.828754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.828934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.828966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 
00:27:27.964 [2024-12-11 15:08:20.829065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.829096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.829314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.829347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.829458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.829491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.829613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.829644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.829827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.829859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.830043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.830075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.830191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.830223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.830328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.830360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.830532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.830564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.830758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.830790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 
00:27:27.964 [2024-12-11 15:08:20.830888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.830920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.831031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.831064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.831175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.831208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.831314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.831346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.831460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.831492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.831603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.831635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.831742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.831774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.831886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.831918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.832034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.832066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.832209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.832243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 
00:27:27.964 [2024-12-11 15:08:20.832416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.832446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.832620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.832652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.832769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.832801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.832995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.833027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.833218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.833251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.833374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.833405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.833524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.833555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.833669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.833700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.833818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.833849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.834033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.834063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 
00:27:27.964 [2024-12-11 15:08:20.834178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.834211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.834380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.834411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.834522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.834559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.834785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.834817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.834934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.834965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.835077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.835108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.964 [2024-12-11 15:08:20.835320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.964 [2024-12-11 15:08:20.835354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.964 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.835577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.835609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.835717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.835749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.835846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.835878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-12-11 15:08:20.835979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.836011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.836122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.836153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.836265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.836297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.836474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.836506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.836609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.836641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.836756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.836788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.837015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.837047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.837147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.837189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.837359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.837390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.837578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.837610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-12-11 15:08:20.837781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.837811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.838023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.838054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.838180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.838213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.838325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.838356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.838469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.838500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.838607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.838640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.838749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.838781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.838892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.838923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.839038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.839070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.839178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.839212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-12-11 15:08:20.839378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.839409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.839579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.839610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.839713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.839744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.839862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.839893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.840028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.840058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.840271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.840303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.840414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.840446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.840558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.840588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.840700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.840731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.840898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.840929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 
00:27:27.965 [2024-12-11 15:08:20.841033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.841064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.841231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.841263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.841380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.841417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.841591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.841623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.841738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.965 [2024-12-11 15:08:20.841770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.965 qpair failed and we were unable to recover it. 00:27:27.965 [2024-12-11 15:08:20.841872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.841903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.842004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.842034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.842146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.842202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.842323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.842355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.842522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.842554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 
00:27:27.966 [2024-12-11 15:08:20.842767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.842799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.842917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.842948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.843052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.843084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.843194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.843227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.843328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.843359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.843527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.843559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.843683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.843715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.843846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.843878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.844056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.844087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.844191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.844223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 
00:27:27.966 [2024-12-11 15:08:20.844392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.844424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.844534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.844566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.844747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.844777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.844891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.844920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.845043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.845072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.845183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.845211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.845309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.845338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.845501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.845529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.845709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.845741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.845856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.845889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 
00:27:27.966 [2024-12-11 15:08:20.846057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.846088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.846210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.846251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.846346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.846374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.846481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.846509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.846606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.846635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.846728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.846756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.846957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.846986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.847084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.847112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.847311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.847341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 00:27:27.966 [2024-12-11 15:08:20.847523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.966 [2024-12-11 15:08:20.847552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.966 qpair failed and we were unable to recover it. 
00:27:27.967 [2024-12-11 15:08:20.847722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.847750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.847853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.847882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.848046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.848080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.848282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.848314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.848436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.848467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.848577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.848608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.848776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.848806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.848920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.848951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.849138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.849181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.849282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.849313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 
00:27:27.967 [2024-12-11 15:08:20.849440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.849484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.849652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.849681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.849859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.849886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.849991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.850019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.850229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.850257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.850363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.850392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.850516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.850545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.850641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.850669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.850766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.850793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.850905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.850933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 
00:27:27.967 [2024-12-11 15:08:20.851043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.851071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.851246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.851279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.851445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.851476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.851593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.851624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.851728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.851759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.852028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.852060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.852172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.852205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.852308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.852340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.852459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.852487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.852598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.852625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 
00:27:27.967 [2024-12-11 15:08:20.852727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.852755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.852848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.852877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.853039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.853067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.853199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.853230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.853417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.853445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.853556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.853584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.853702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.853730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.853924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.967 [2024-12-11 15:08:20.853952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.967 qpair failed and we were unable to recover it. 00:27:27.967 [2024-12-11 15:08:20.854061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.854090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.854191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.854220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 
00:27:27.968 [2024-12-11 15:08:20.854385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.854414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.854578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.854610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.854722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.854759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.854860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.854892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.855000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.855031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.855143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.855184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.855372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.855400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.855501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.855530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.855625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.855653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.855816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.855845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 
00:27:27.968 [2024-12-11 15:08:20.855938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.855966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.856126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.856156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.856288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.856317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.856410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.856439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.856563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.856591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.856756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.856785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.856989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.857018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.857182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.857211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.857316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.857344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.857453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.857496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 
00:27:27.968 [2024-12-11 15:08:20.857614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.857647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.857768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.857799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.857901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.857932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.858110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.858141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.858320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.858351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.858470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.858502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.858612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.858643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.858831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.858863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.859029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.859059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.859248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.859320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 
00:27:27.968 [2024-12-11 15:08:20.859451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.859487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.859599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.859633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.859804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.859836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.860005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.860038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.860149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.860194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.860370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.860401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.968 qpair failed and we were unable to recover it. 00:27:27.968 [2024-12-11 15:08:20.860604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.968 [2024-12-11 15:08:20.860636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.860744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.860776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.860950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.860982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.861181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.861215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 
00:27:27.969 [2024-12-11 15:08:20.861333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.861366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.861501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.861534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.861687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.861729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.861848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.861880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.861989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.862022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.862125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.862169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.862273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.862305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.862424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.862456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.862655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.862688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.862795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.862826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 
00:27:27.969 [2024-12-11 15:08:20.863027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.863059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.863173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.863207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.863408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.863440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.863544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.863576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.863733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.863765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.863933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.863965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.864150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.864193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.864311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.864343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.864510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.864542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.864722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.864754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 
00:27:27.969 [2024-12-11 15:08:20.864861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.864894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.865010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.865042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.865212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.865246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.865363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.865395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.865553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.865585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.865782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.865814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.865935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.865968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.866074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.866106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.866310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.866343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.866601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.866672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 
00:27:27.969 [2024-12-11 15:08:20.866812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.866847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.867018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.867050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.867233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.867268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.867392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.867424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.867631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.867663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.969 qpair failed and we were unable to recover it. 00:27:27.969 [2024-12-11 15:08:20.867843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.969 [2024-12-11 15:08:20.867874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.868059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.868092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.868205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.868237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.868351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.868383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.868496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.868528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 
00:27:27.970 [2024-12-11 15:08:20.868695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.868725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.868850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.868881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.868996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.869037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.869145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.869190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.869383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.869415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.869524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.869554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.869744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.869776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.869886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.869917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.870044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.870076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.870181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.870214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 
00:27:27.970 [2024-12-11 15:08:20.870330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.870361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.870466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.870497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.870669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.870700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.870821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.870853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.871089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.871120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.871234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.871266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.871378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.871410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.871530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.871562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.871731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.871762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.871882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.871914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 
00:27:27.970 [2024-12-11 15:08:20.872033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.872064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.872178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.872211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.872384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.872416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.872515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.872547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.872753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.872785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.872958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.872990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.873171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.873204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.873333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.873364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.873538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.873570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.873750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.873782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 
00:27:27.970 [2024-12-11 15:08:20.873911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.970 [2024-12-11 15:08:20.873943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.970 qpair failed and we were unable to recover it. 00:27:27.970 [2024-12-11 15:08:20.874201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.874233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.874353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.874385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.874552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.874583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.874700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.874731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.874860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.874892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.874999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.875030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.875249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.875281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.875520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.875552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.875667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.875698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 
00:27:27.971 [2024-12-11 15:08:20.875807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.875838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.875960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.875993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.876099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.876136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.876329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.876361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.876534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.876565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.876752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.876785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.876909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.876940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.877062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.877094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.877197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.877231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.877347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.877378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 
00:27:27.971 [2024-12-11 15:08:20.877485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.877516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.877697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.877730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.877900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.877931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.878044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.878076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.878275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.878308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.878415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.878446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.878573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.878605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.878770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.878800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.878913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.878944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.879060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.879091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 
00:27:27.971 [2024-12-11 15:08:20.879219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.879251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.879430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.879461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.879572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.879604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.879721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.879752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.879860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.879891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.880152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.880192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.880398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.880429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.880541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.971 [2024-12-11 15:08:20.880580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.971 qpair failed and we were unable to recover it. 00:27:27.971 [2024-12-11 15:08:20.880689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.880720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.880982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.881052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 
00:27:27.972 [2024-12-11 15:08:20.881251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.881288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.881419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.881451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.881569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.881601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.881717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.881748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.881925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.881960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.882072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.882103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.882281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.882313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.882484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.882517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.882694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.882727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.882901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.882933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 
00:27:27.972 [2024-12-11 15:08:20.883141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.883181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.883285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.883314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.883424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.883463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.883657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.883691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.883804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.883836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.883977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.884010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.884125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.884155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.884283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.884314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.884480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.884513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.884690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.884723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 
00:27:27.972 [2024-12-11 15:08:20.884892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.884922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.885089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.885121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.885302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.885334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.885504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.885536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.885709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.885741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.885852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.885883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.886003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.886037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.886204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.886236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.886346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.886377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.886504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.886537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 
00:27:27.972 [2024-12-11 15:08:20.886668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.886699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.886819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.886852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.886968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.886999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.887124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.887155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.887277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.887309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.887475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.887506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.887673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.972 [2024-12-11 15:08:20.887704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.972 qpair failed and we were unable to recover it. 00:27:27.972 [2024-12-11 15:08:20.887876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-11 15:08:20.887908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-11 15:08:20.888031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-11 15:08:20.888062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-11 15:08:20.888236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-11 15:08:20.888306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 
00:27:27.973 [2024-12-11 15:08:20.888516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-11 15:08:20.888551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-11 15:08:20.888739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-11 15:08:20.888772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-11 15:08:20.888943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-11 15:08:20.888975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-11 15:08:20.889082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-11 15:08:20.889114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-11 15:08:20.889241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-11 15:08:20.889273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-11 15:08:20.889445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-11 15:08:20.889477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-11 15:08:20.889601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-11 15:08:20.889632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-11 15:08:20.889746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-11 15:08:20.889778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-11 15:08:20.889881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-11 15:08:20.889911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 00:27:27.973 [2024-12-11 15:08:20.890035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-11 15:08:20.890068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it. 
00:27:27.973 [2024-12-11 15:08:20.890235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.973 [2024-12-11 15:08:20.890267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.973 qpair failed and we were unable to recover it.
[... the same pair of errors, posix.c:1054:posix_sock_create: connect() failed, errno = 111 followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420, repeats continuously from 15:08:20.890 through 15:08:20.925, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:27:27.978 [2024-12-11 15:08:20.925077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-12-11 15:08:20.925109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it.
00:27:27.978 [2024-12-11 15:08:20.925382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-12-11 15:08:20.925415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-12-11 15:08:20.925582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-12-11 15:08:20.925615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-12-11 15:08:20.925726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-12-11 15:08:20.925757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-12-11 15:08:20.925926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-12-11 15:08:20.925958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-12-11 15:08:20.926058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-12-11 15:08:20.926089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-12-11 15:08:20.926260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-12-11 15:08:20.926292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-12-11 15:08:20.926468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-12-11 15:08:20.926501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-12-11 15:08:20.926619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-12-11 15:08:20.926651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-12-11 15:08:20.926785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.978 [2024-12-11 15:08:20.926814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.978 qpair failed and we were unable to recover it. 00:27:27.978 [2024-12-11 15:08:20.926977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.927006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 
00:27:27.979 [2024-12-11 15:08:20.927174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.927204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.927309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.927338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.927431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.927460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.927567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.927596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.927691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.927719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.927819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.927848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.928009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.928037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.928218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.928248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.928361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.928390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.928493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.928531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 
00:27:27.979 [2024-12-11 15:08:20.928700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.928728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.928833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.928862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.929025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.929054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.929211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.929240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.929352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.929381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.929582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.929613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.929735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.929767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.929876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.929908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.930009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.930040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.930150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.930192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 
00:27:27.979 [2024-12-11 15:08:20.930358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.930390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.930512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.930541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.930641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.930670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.930783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.930811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.930922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.930951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.931059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.931088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.931195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.931225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.931333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.931362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.931474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.931502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.931612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.931640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 
00:27:27.979 [2024-12-11 15:08:20.931803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.931831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.932007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.932036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.932129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.932188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.932305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.932335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.932435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.932465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.932565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.932593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.979 [2024-12-11 15:08:20.932697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.979 [2024-12-11 15:08:20.932726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.979 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.932909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.932937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.933036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.933065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.933170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.933200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 
00:27:27.980 [2024-12-11 15:08:20.933300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.933331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.933426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.933454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.933624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.933653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.933752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.933780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.933987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.934016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.934187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.934217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.934324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.934352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.934444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.934472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.934666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.934695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.934859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.934894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 
00:27:27.980 [2024-12-11 15:08:20.935003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.935032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.935229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.935256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.935374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.935400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.935493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.935519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.935710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.935742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.935853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.935884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.935995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.936026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.936138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.936195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.936320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.936351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.936545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.936572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 
00:27:27.980 [2024-12-11 15:08:20.936666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.936693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.936850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.936876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.980 [2024-12-11 15:08:20.937035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.980 [2024-12-11 15:08:20.937061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.980 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.937294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.937323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.937416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.937441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.937617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.937649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.937833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.937865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.938033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.938065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.938186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.938219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.938334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.938367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 
00:27:27.981 [2024-12-11 15:08:20.938488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.938515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.938676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.938702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.938864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.938890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.938993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.939019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.939197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.939224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.939315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.939341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.939455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.939483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.939601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.939628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.939806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.939832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.939920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.939946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 
00:27:27.981 [2024-12-11 15:08:20.940106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.940132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.940301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.940372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.940500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.940537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.940802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.940835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.941038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.941086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.941257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.941306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.941446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.941479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.941595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.941626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.941750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.941789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.941964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.942004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 
00:27:27.981 [2024-12-11 15:08:20.942184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.942217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.942328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.942361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.942465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.942496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.942627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.942658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.942831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.942863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.943052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.943083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.943218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.943265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.943413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.943449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.943575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.943606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.943813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.943843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 
00:27:27.981 [2024-12-11 15:08:20.943948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.943980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.944093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.944124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.944329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.944362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.944478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.944509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.944629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.981 [2024-12-11 15:08:20.944664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.981 qpair failed and we were unable to recover it. 00:27:27.981 [2024-12-11 15:08:20.944773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.982 [2024-12-11 15:08:20.944804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.982 qpair failed and we were unable to recover it. 00:27:27.982 [2024-12-11 15:08:20.944923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.982 [2024-12-11 15:08:20.944956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.982 qpair failed and we were unable to recover it. 00:27:27.982 [2024-12-11 15:08:20.945058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.982 [2024-12-11 15:08:20.945089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:27.982 qpair failed and we were unable to recover it. 00:27:27.982 [2024-12-11 15:08:20.945214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.982 [2024-12-11 15:08:20.945246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:27.982 qpair failed and we were unable to recover it. 00:27:28.321 [2024-12-11 15:08:20.945348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.321 [2024-12-11 15:08:20.945375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.321 qpair failed and we were unable to recover it. 
00:27:28.321 [2024-12-11 15:08:20.945534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.321 [2024-12-11 15:08:20.945565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.321 qpair failed and we were unable to recover it. 00:27:28.321 [2024-12-11 15:08:20.945674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.321 [2024-12-11 15:08:20.945707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.321 qpair failed and we were unable to recover it. 00:27:28.321 [2024-12-11 15:08:20.945813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.321 [2024-12-11 15:08:20.945844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.321 qpair failed and we were unable to recover it. 00:27:28.321 [2024-12-11 15:08:20.946047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.321 [2024-12-11 15:08:20.946079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.321 qpair failed and we were unable to recover it. 00:27:28.321 [2024-12-11 15:08:20.946250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.321 [2024-12-11 15:08:20.946283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.321 qpair failed and we were unable to recover it. 00:27:28.321 [2024-12-11 15:08:20.946390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.321 [2024-12-11 15:08:20.946422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.321 qpair failed and we were unable to recover it. 00:27:28.321 [2024-12-11 15:08:20.946594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.321 [2024-12-11 15:08:20.946663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.321 qpair failed and we were unable to recover it. 00:27:28.321 [2024-12-11 15:08:20.946812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.321 [2024-12-11 15:08:20.946857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.321 qpair failed and we were unable to recover it. 00:27:28.321 [2024-12-11 15:08:20.946979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.321 [2024-12-11 15:08:20.947012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.321 qpair failed and we were unable to recover it. 00:27:28.321 [2024-12-11 15:08:20.947180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.321 [2024-12-11 15:08:20.947213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.321 qpair failed and we were unable to recover it. 
00:27:28.321 [2024-12-11 15:08:20.947395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.321 [2024-12-11 15:08:20.947426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.321 qpair failed and we were unable to recover it. 00:27:28.321 [2024-12-11 15:08:20.947541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.321 [2024-12-11 15:08:20.947572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.321 qpair failed and we were unable to recover it. 00:27:28.321 [2024-12-11 15:08:20.947693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.321 [2024-12-11 15:08:20.947725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.321 qpair failed and we were unable to recover it. 00:27:28.321 [2024-12-11 15:08:20.947836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.321 [2024-12-11 15:08:20.947868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.321 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.947995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.948031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.948228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.948266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.948389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.948422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.948587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.948618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.948738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.948770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.948956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.948994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 
00:27:28.322 [2024-12-11 15:08:20.949114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.949146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.949325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.949357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.949474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.949506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.949693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.949725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.949847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.949878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.949997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.950029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.950216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.950250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.950362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.950395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.950501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.950531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.950665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.950696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 
00:27:28.322 [2024-12-11 15:08:20.950872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.950903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.951004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.951036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.951137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.951176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.951291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.951323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.951434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.951466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.951586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.951618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.951793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.951825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.951994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.952025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.952149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.952213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.952323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.952354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 
00:27:28.322 [2024-12-11 15:08:20.952456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.952489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.952623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.952654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.952834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.952866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.953075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.953107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.953281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.953313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.953415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.953446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.953561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.953593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.953707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.953739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.953908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.953939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.954113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.954143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 
00:27:28.322 [2024-12-11 15:08:20.954272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.954304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.954483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.322 [2024-12-11 15:08:20.954514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.322 qpair failed and we were unable to recover it. 00:27:28.322 [2024-12-11 15:08:20.954613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.954644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.954847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.954878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.955002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.955034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.955175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.955208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.955314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.955345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.955479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.955511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.955676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.955708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.955875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.955912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 
00:27:28.323 [2024-12-11 15:08:20.956025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.956057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.956180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.956213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.956412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.956444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.956552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.956585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.956712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.956743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.956848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.956879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.956977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.957009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.957110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.957142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.957260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.957292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.957468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.957499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 
00:27:28.323 [2024-12-11 15:08:20.957614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.957645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.957746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.957778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.957908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.957940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.958138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.958181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.958300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.958332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.958442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.958473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.958591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.958623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.958754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.958785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.958894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.958925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.959027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.959059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 
00:27:28.323 [2024-12-11 15:08:20.959175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.959207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.959331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.959363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.959562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.959595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.959733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.959765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.959936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.959967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.960079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.960111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.960284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.960362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.960594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.960652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.960833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.960866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.960969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.961002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 
00:27:28.323 [2024-12-11 15:08:20.961125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.323 [2024-12-11 15:08:20.961155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.323 qpair failed and we were unable to recover it. 00:27:28.323 [2024-12-11 15:08:20.961279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.961312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.961502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.961535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.961653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.961685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.961793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.961826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.961930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.961961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.962068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.962100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.962237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.962271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.962440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.962471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.962595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.962637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 
00:27:28.324 [2024-12-11 15:08:20.962806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.962838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.963029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.963061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.963180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.963213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.963325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.963356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.963466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.963498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.963618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.963650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.963819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.963852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.963965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.963997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.964190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.964223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.964326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.964357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 
00:27:28.324 [2024-12-11 15:08:20.964461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.964492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.964601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.964632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.964813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.964845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.964970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.965003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.965186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.965218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.965349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.965383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.965495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.965526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.965644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.965675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.965801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.965834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.965945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.965976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 
00:27:28.324 [2024-12-11 15:08:20.966085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.966117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.966294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.966326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.966438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.966468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.966633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.966666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.966793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.966824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.967024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.967056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.967197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.967244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.967385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.967422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.967525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.967559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.324 qpair failed and we were unable to recover it. 00:27:28.324 [2024-12-11 15:08:20.967666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.324 [2024-12-11 15:08:20.967699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 
00:27:28.325 [2024-12-11 15:08:20.967811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.967847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.967974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.968009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.968189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.968226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.968355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.968388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.968501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.968544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.968728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.968766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.968941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.968975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.969169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.969211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.969408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.969445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.969565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.969599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 
00:27:28.325 [2024-12-11 15:08:20.969715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.969749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.969918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.969957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.970137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.970191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.970315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.970350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.970460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.970493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.970611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.970645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.970764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.970799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.970907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.970939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.971109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.971140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.971255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.971288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 
00:27:28.325 [2024-12-11 15:08:20.971459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.971490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.971664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.971695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.971794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.971825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.971933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.971969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.972087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.972117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.972251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.972283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.972387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.972418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.972530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.972560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.972733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.972765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.972868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.972899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 
00:27:28.325 [2024-12-11 15:08:20.973013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.973043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.973147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.973197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.973365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.973396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.973517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.973548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.973662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.973694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.973883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.973914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.974086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.974124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.974295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.325 [2024-12-11 15:08:20.974328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.325 qpair failed and we were unable to recover it. 00:27:28.325 [2024-12-11 15:08:20.974442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.974474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.974579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.974610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 
00:27:28.326 [2024-12-11 15:08:20.974712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.974744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.974923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.974955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.975056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.975087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.975190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.975223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.975345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.975377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.975496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.975528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.975647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.975678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.975781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.975813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.975927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.975958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.976172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.976206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 
00:27:28.326 [2024-12-11 15:08:20.976329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.976361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.976552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.976584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.976699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.976730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.976943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.976974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.977090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.977121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.977337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.977370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.977607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.977639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.977751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.977781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.977963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.977994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.978170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.978202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 
00:27:28.326 [2024-12-11 15:08:20.978374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.978407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.978600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.978632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.978740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.978771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.978965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.979016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.979129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.979168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.979265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.979292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.979402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.979428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.979516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.979543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.979629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.979656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.979755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.979781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 
00:27:28.326 [2024-12-11 15:08:20.979898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.326 [2024-12-11 15:08:20.979924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.326 qpair failed and we were unable to recover it. 00:27:28.326 [2024-12-11 15:08:20.980014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.980040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.980134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.980166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.980261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.980287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.980393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.980419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.980575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.980600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.980760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.980786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.980895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.980922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.981077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.981103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.981210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.981238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 
00:27:28.327 [2024-12-11 15:08:20.981402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.981428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.981588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.981613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.981707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.981733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.981830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.981856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.982012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.982039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.982204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.982231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.982323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.982349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.982449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.982475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.982573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.982599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.982697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.982722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 
00:27:28.327 [2024-12-11 15:08:20.982884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.982910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.983095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.983121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.983225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.983252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.983371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.983398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.983558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.983584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.983754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.983780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.983941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.983967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.984176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.984204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.984362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.984388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.984490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.984517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 
00:27:28.327 [2024-12-11 15:08:20.984623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.984650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.984814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.984839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.984996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.985038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.985153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.985203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.985370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.985401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.985570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.985612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.985768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.985795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.985951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.985977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.986078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.986104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.327 qpair failed and we were unable to recover it. 00:27:28.327 [2024-12-11 15:08:20.986212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.327 [2024-12-11 15:08:20.986256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 
00:27:28.328 [2024-12-11 15:08:20.986356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.986387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.986504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.986535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.986719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.986750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.986859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.986891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.987015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.987045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.987169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.987202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.987316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.987347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.987454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.987487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.987654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.987685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.987800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.987830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 
00:27:28.328 [2024-12-11 15:08:20.987953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.987986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.988104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.988134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.988250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.988282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.988468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.988501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.988722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.988754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.988854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.988885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.989053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.989085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.989186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.989219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.989331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.989362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.989483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.989514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 
00:27:28.328 [2024-12-11 15:08:20.989754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.989786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.989888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.989918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.990084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.990115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.990297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.990329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.990431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.990463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.990627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.990659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.990857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.990890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.991018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.991049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.991151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.991193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.991308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.991339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 
00:27:28.328 [2024-12-11 15:08:20.991506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.991537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.991755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.991787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.991974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.992006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.992221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3271490 Killed "${NVMF_APP[@]}" "$@" 00:27:28.328 [2024-12-11 15:08:20.992262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 [2024-12-11 15:08:20.992437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.992469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.328 15:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:28.328 15:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:28.328 15:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:28.328 15:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:28.328 15:08:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:28.328 [2024-12-11 15:08:20.994272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.328 [2024-12-11 15:08:20.994333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.328 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.994623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.994657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 
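The "line 36: 3271490 Killed \"${NVMF_APP[@]}\"" message, together with the disconnect_init 10.0.0.2 / nvmfappstart -m 0xF0 trace that follows, marks the point where target_disconnect.sh deliberately terminates the running nvmf_tgt process (hence the flood of ECONNREFUSED above) and then brings a fresh target up for test case tc2. A rough, hedged condensation of that kill-then-restart step (the real helpers live in test/nvmf/host/target_disconnect.sh and test/nvmf/common.sh and do more work):

  # Hypothetical sketch of the disconnect/reconnect step.
  kill -9 "$nvmfpid" 2>/dev/null || true   # drop the target; host qpairs start failing
  wait "$nvmfpid" 2>/dev/null || true      # reap it (this is what produces the "Killed" line)
  "${NVMF_APP[@]}" -m 0xF0 &               # relaunch the target application
  nvmfpid=$!                               # remember the new PID for waitforlisten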
00:27:28.329 [2024-12-11 15:08:20.994834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.994866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.995063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.995094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.995203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.995236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.995371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.995402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.995518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.995549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.995654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.995684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.995810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.995841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.995966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.995999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.996107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.996139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.996357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.996385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 
00:27:28.329 [2024-12-11 15:08:20.996481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.996510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.996604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.996633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.996792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.996821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.996999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.997026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.997135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.997173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.997291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.997321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.997417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.997445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.997540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.997568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.997728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.997757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.997879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.997908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 
00:27:28.329 [2024-12-11 15:08:20.998019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.998053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.998150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.998189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.998358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.998386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.998485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.998512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.998673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.998700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.998802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.998830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.998950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.998977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.999092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.999118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.999291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.999320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.999426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.999454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 
00:27:28.329 [2024-12-11 15:08:20.999550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.999577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.999733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.999763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:20.999868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:20.999896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:21.000002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:21.000031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:21.000273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:21.000305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:21.000400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:21.000427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:21.000520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:21.000550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.329 [2024-12-11 15:08:21.000641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.329 [2024-12-11 15:08:21.000669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.329 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.000874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.000903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.001008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.001036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 
00:27:28.330 [2024-12-11 15:08:21.001150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.001208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.001327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.001356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3272227 00:27:28.330 [2024-12-11 15:08:21.001542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.001571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.001688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.001718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3272227 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:28.330 [2024-12-11 15:08:21.001884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.001914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.002024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.002059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3272227 ']' 00:27:28.330 [2024-12-11 15:08:21.002171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.002203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.002311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.002340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 
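The interleaved trace here shows nvmfappstart launching the replacement target inside the cvl_0_0_ns_spdk network namespace (ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF0), recording its PID as nvmfpid=3272227, and handing that PID to waitforlisten, which polls until the process answers on the /var/tmp/spdk.sock RPC socket. A minimal polling loop in the same spirit (assumption: this is a simplification of waitforlisten from autotest_common.sh, not its actual body):

  pid=3272227                  # PID reported as nvmfpid in the log
  rpc_addr=/var/tmp/spdk.sock  # RPC socket named in the "Waiting for process" message
  max_retries=100
  for ((i = 0; i < max_retries; i++)); do
      # Abort if the target died instead of coming up.
      kill -0 "$pid" 2>/dev/null || { echo "target $pid exited"; exit 1; }
      # rpc_get_methods is a cheap RPC; success means the socket is live.
      if ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
          echo "target $pid is listening on $rpc_addr"
          break
      fi
      sleep 0.5
  done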
00:27:28.330 [2024-12-11 15:08:21.002438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.330 [2024-12-11 15:08:21.002468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.002625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.002655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:28.330 [2024-12-11 15:08:21.002837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.002866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.003026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.330 [2024-12-11 15:08:21.003055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.003233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.003264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:28.330 [2024-12-11 15:08:21.003377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.003409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.003513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.003542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 
00:27:28.330 [2024-12-11 15:08:21.003645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:28.330 [2024-12-11 15:08:21.003675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.003849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.003878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.004044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.004074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.004193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.004222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.004381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.004409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.004519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.004547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.004710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.004739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.004930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.004958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.005073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.005102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 
00:27:28.330 [2024-12-11 15:08:21.005224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.005259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.005427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.005457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.330 [2024-12-11 15:08:21.005640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.330 [2024-12-11 15:08:21.005669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.330 qpair failed and we were unable to recover it. 00:27:28.331 [2024-12-11 15:08:21.005853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.331 [2024-12-11 15:08:21.005881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.331 qpair failed and we were unable to recover it. 00:27:28.331 [2024-12-11 15:08:21.006006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.331 [2024-12-11 15:08:21.006033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.331 qpair failed and we were unable to recover it. 00:27:28.331 [2024-12-11 15:08:21.006201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.331 [2024-12-11 15:08:21.006231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.331 qpair failed and we were unable to recover it. 00:27:28.331 [2024-12-11 15:08:21.006354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.331 [2024-12-11 15:08:21.006383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.331 qpair failed and we were unable to recover it. 00:27:28.331 [2024-12-11 15:08:21.006483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.331 [2024-12-11 15:08:21.006511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.331 qpair failed and we were unable to recover it. 00:27:28.331 [2024-12-11 15:08:21.006620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.331 [2024-12-11 15:08:21.006647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.331 qpair failed and we were unable to recover it. 00:27:28.331 [2024-12-11 15:08:21.006765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.331 [2024-12-11 15:08:21.006793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.331 qpair failed and we were unable to recover it. 
00:27:28.336 [2024-12-11 15:08:21.038565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.038596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.038719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.038753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.038883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.038917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.039028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.039062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.039218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.039253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.039427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.039462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.039628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.039659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.039832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.039861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.040032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.040071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.040183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.040216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 
00:27:28.336 [2024-12-11 15:08:21.040339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.040371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.040615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.040647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.040762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.040793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.041059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.041093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.041309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.041343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.041519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.041553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.041675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.041708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.041817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.041850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.042020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.042048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.042225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.042259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 
00:27:28.336 [2024-12-11 15:08:21.042371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.336 [2024-12-11 15:08:21.042404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.336 qpair failed and we were unable to recover it. 00:27:28.336 [2024-12-11 15:08:21.042521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.042552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.042667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.042700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.042823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.042855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.042978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.043011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.043114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.043148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.043276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.043306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.043437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.043469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.043577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.043610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.043723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.043755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 
00:27:28.337 [2024-12-11 15:08:21.043922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.043954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.044071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.044101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.044233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.044265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.044382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.044416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.044524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.044557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.044680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.044711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.044890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.044921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.045038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.045071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.045180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.045216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.045319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.045345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 
00:27:28.337 [2024-12-11 15:08:21.045454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.045477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.045565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.045587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.045778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.045800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.045973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.045996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.046103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.046125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.046217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.046242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.046393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.046416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.046509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.046532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.046615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.046644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.046735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.046757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 
00:27:28.337 [2024-12-11 15:08:21.046854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.046888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.046975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.046994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.047088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.047108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.047197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.047217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.047305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.047324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.047403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.047422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.047502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.047521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.047685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.047705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.047855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.047874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 00:27:28.337 [2024-12-11 15:08:21.047952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.337 [2024-12-11 15:08:21.047971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.337 qpair failed and we were unable to recover it. 
00:27:28.338 [2024-12-11 15:08:21.048045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.048064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.048142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.048179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.048336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.048357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.048439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.048458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.048551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.048571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.048684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.048703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.048788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.048811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.048887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.048906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.049010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.049030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.049177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.049199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 
00:27:28.338 [2024-12-11 15:08:21.049411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.049431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.049517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.049537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.049697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.049717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.049793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.049813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.049893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.049912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.050059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.050079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.050168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.050192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.050295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.050314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.050395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.050414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.050518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.050538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 
00:27:28.338 [2024-12-11 15:08:21.050627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.050647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.050750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.050771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.050941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.050961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.051056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.051077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.051155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.051182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.051337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.051357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.051517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.051538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.051690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.051711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.051794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.051818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 00:27:28.338 [2024-12-11 15:08:21.051895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.338 [2024-12-11 15:08:21.051915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.338 qpair failed and we were unable to recover it. 
00:27:28.338 [2024-12-11 15:08:21.051974] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:27:28.338 [2024-12-11 15:08:21.052022] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
[... the connect() failed (errno = 111) / sock connection error / qpair failed sequence continues to repeat around these messages through 15:08:21.052 ...]
[... the same posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock sock connection error (tqpair=0x7f9ce8000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." sequence keeps repeating from 15:08:21.053 through 15:08:21.062 ...]
00:27:28.340 [2024-12-11 15:08:21.063170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.340 [2024-12-11 15:08:21.063194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.340 qpair failed and we were unable to recover it. 00:27:28.340 [2024-12-11 15:08:21.063299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.340 [2024-12-11 15:08:21.063321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.340 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.063419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.063455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.063536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.063557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.063638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.063659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.063750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.063772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.063853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.063875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.064025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.064049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.064131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.064152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.064307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.064330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 
00:27:28.341 [2024-12-11 15:08:21.064412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.064433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.064583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.064605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.064701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.064724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.064966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.064988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.065096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.065117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.065221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.065244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.065416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.065438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.065585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.065607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.065763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.065787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.065935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.065957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 
00:27:28.341 [2024-12-11 15:08:21.066052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.066075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.066317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.066341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.066440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.066463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.066546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.066568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.066671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.066698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.066861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.066887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.066985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.067012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.067105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.067131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.067238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.067265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.067456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.067482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 
00:27:28.341 [2024-12-11 15:08:21.067574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.067599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.067699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.067725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.067834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.067859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.068016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.068042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.068199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.068226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.068383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.068410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.068568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.068594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.068692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.068718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.068840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.068870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.341 [2024-12-11 15:08:21.068977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.069003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 
00:27:28.341 [2024-12-11 15:08:21.069183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.341 [2024-12-11 15:08:21.069209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.341 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.069315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.069342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.069444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.069470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.069598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.069636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.069728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.069754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.069912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.069938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.070051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.070078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.070183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.070210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.070393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.070419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.070595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.070621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 
00:27:28.342 [2024-12-11 15:08:21.070714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.070741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.070898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.070924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.071027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.071053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.071228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.071254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.071420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.071445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.071637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.071662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.071757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.071783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.071892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.071919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.072070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.072096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.072198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.072225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 
00:27:28.342 [2024-12-11 15:08:21.072413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.072439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.072616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.072641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.072794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.072820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.072908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.072934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.073089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.073115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.073237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.073265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.073434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.073461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.073614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.073640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.073735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.073759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.073911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.073937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 
00:27:28.342 [2024-12-11 15:08:21.074105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.074130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.074326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.074354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.074514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.074541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.074717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.074742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.074849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.074876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.074962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.074989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.075174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.075201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.075422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.075449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.075638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.075669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 00:27:28.342 [2024-12-11 15:08:21.075766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.342 [2024-12-11 15:08:21.075791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.342 qpair failed and we were unable to recover it. 
00:27:28.342 [2024-12-11 15:08:21.075894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.075920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.076025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.076052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.076207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.076235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.076419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.076444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.076530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.076555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.076651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.076678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.076774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.076800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.076904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.076932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.077041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.077069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.077192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.077221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 
00:27:28.343 [2024-12-11 15:08:21.077398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.077426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.077610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.077639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.077741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.077768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.077871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.077899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.077992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.078020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.078179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.078207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.078313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.078340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.078438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.078466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.078628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.078656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.078754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.078782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 
00:27:28.343 [2024-12-11 15:08:21.078888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.078916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.079076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.079103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.079273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.079302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.079464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.079491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.079595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.079623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.079731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.079759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.079865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.079892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.079988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.080016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.080176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.080206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.080435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.080464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 
00:27:28.343 [2024-12-11 15:08:21.080567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.080596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.080698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.080726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.080881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.080909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.081003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.081031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.343 [2024-12-11 15:08:21.081191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.343 [2024-12-11 15:08:21.081220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.343 qpair failed and we were unable to recover it. 00:27:28.344 [2024-12-11 15:08:21.081411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.344 [2024-12-11 15:08:21.081439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.344 qpair failed and we were unable to recover it. 00:27:28.344 [2024-12-11 15:08:21.081532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.344 [2024-12-11 15:08:21.081560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.344 qpair failed and we were unable to recover it. 00:27:28.344 [2024-12-11 15:08:21.081660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.344 [2024-12-11 15:08:21.081688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.344 qpair failed and we were unable to recover it. 00:27:28.344 [2024-12-11 15:08:21.081846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.344 [2024-12-11 15:08:21.081878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.344 qpair failed and we were unable to recover it. 00:27:28.344 [2024-12-11 15:08:21.082001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.344 [2024-12-11 15:08:21.082029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.344 qpair failed and we were unable to recover it. 
00:27:28.344 [2024-12-11 15:08:21.082129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.344 [2024-12-11 15:08:21.082165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.344 qpair failed and we were unable to recover it. 00:27:28.344 [2024-12-11 15:08:21.082265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.344 [2024-12-11 15:08:21.082292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.344 qpair failed and we were unable to recover it. 00:27:28.344 [2024-12-11 15:08:21.082527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.344 [2024-12-11 15:08:21.082555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.344 qpair failed and we were unable to recover it. 00:27:28.344 [2024-12-11 15:08:21.082662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.344 [2024-12-11 15:08:21.082689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.344 qpair failed and we were unable to recover it. 00:27:28.344 [2024-12-11 15:08:21.082784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.344 [2024-12-11 15:08:21.082811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.344 qpair failed and we were unable to recover it. 00:27:28.344 [2024-12-11 15:08:21.082901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.344 [2024-12-11 15:08:21.082928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.344 qpair failed and we were unable to recover it. 00:27:28.344 [2024-12-11 15:08:21.083083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.344 [2024-12-11 15:08:21.083110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.344 qpair failed and we were unable to recover it. 00:27:28.344 [2024-12-11 15:08:21.083272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.344 [2024-12-11 15:08:21.083300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.344 qpair failed and we were unable to recover it. 00:27:28.344 [2024-12-11 15:08:21.083461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.344 [2024-12-11 15:08:21.083489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.344 qpair failed and we were unable to recover it. 00:27:28.344 [2024-12-11 15:08:21.083646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.344 [2024-12-11 15:08:21.083673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.344 qpair failed and we were unable to recover it. 
00:27:28.344 [2024-12-11 15:08:21.083828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.344 [2024-12-11 15:08:21.083855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420
00:27:28.344 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 through 2024-12-11 15:08:21.099844 ...]
00:27:28.346 [2024-12-11 15:08:21.100078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.346 [2024-12-11 15:08:21.100151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420
00:27:28.346 qpair failed and we were unable to recover it.
00:27:28.346 [2024-12-11 15:08:21.100407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.346 [2024-12-11 15:08:21.100479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420
00:27:28.346 qpair failed and we were unable to recover it.
00:27:28.346 [2024-12-11 15:08:21.100718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.346 [2024-12-11 15:08:21.100786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420
00:27:28.346 qpair failed and we were unable to recover it.
00:27:28.346 [2024-12-11 15:08:21.100970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.346 [2024-12-11 15:08:21.101005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420
00:27:28.346 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 through 2024-12-11 15:08:21.107845 ...]
00:27:28.347 [2024-12-11 15:08:21.108052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.347 [2024-12-11 15:08:21.108095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420
00:27:28.347 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 through 2024-12-11 15:08:21.115458 ...]
00:27:28.349 [2024-12-11 15:08:21.115659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.349 [2024-12-11 15:08:21.115709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420
00:27:28.349 qpair failed and we were unable to recover it.
[... the same three-line sequence repeats for tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 through 2024-12-11 15:08:21.122493 ...]
00:27:28.350 [2024-12-11 15:08:21.122679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.122713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.122906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.122940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.123060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.123094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.123213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.123247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.123373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.123408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.123530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.123564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.123760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.123793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.124050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.124084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.124256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.124291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.124469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.124503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 
00:27:28.350 [2024-12-11 15:08:21.124608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.124642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.124811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.124846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.124966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.124999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.125118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.125151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.125276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.125310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.125506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.125543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.125659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.125693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.125803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.125837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.126017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.126050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.126172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.126213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 
00:27:28.350 [2024-12-11 15:08:21.126455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.126489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.126599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.126633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.126752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.126786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.126890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.126923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.127092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.127126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.127321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.127359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.127533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.127566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.127694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.127726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.127840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.127873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.128133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.128190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 
00:27:28.350 [2024-12-11 15:08:21.128376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.128419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.128587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.128621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.128805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.128838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.129019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.129054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.129297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.129332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.129444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.129477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.129595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.350 [2024-12-11 15:08:21.129629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.350 qpair failed and we were unable to recover it. 00:27:28.350 [2024-12-11 15:08:21.129824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.129858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.129961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.129995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.130116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.130150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 
00:27:28.351 [2024-12-11 15:08:21.130341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.130375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.130478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.130511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.130718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.130752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.130865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.130899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.131015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.131048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.131175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.131211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.131456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.131492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.131618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.131653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.131769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.131804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.131967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.132000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 
00:27:28.351 [2024-12-11 15:08:21.132201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.132236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.132360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.132394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.132508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.132542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.132735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.132770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.132951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.132984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.133085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.133119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.133297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.133332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.133501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.133534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.133622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:28.351 [2024-12-11 15:08:21.133734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.133768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.133939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.133980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 
00:27:28.351 [2024-12-11 15:08:21.134147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.134190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.134308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.134342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.134516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.134551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.134739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.134773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.351 [2024-12-11 15:08:21.134900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.351 [2024-12-11 15:08:21.134934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.351 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.135104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.135139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.135249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.135283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.135524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.135558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.135668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.135701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.135813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.135847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 
00:27:28.352 [2024-12-11 15:08:21.136054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.136088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.136258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.136293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.136461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.136496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.136615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.136649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.136854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.136888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.137009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.137043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.137294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.137329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.137434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.137468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.137645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.137679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.137851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.137885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 
00:27:28.352 [2024-12-11 15:08:21.138004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.138038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.138149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.138208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.138448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.138483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.138652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.138686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.138946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.138981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.139092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.139126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.139365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.139411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.139552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.139588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.139694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.139727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.139843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.139877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 
00:27:28.352 [2024-12-11 15:08:21.139997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.140031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.140203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.140238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.140346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.140380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.352 qpair failed and we were unable to recover it. 00:27:28.352 [2024-12-11 15:08:21.140498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.352 [2024-12-11 15:08:21.140531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.140792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.140825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.140939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.140973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.141094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.141128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.141336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.141369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.141544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.141577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.141685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.141719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 
00:27:28.353 [2024-12-11 15:08:21.141911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.141943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.142058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.142091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.142208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.142244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.142415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.142448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.142616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.142649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.142820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.142854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.143021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.143055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.143181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.143215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.143477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.143511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.143629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.143662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 
00:27:28.353 [2024-12-11 15:08:21.143834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.143868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.144057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.144091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.144282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.144318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.144528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.144567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.144742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.144776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.144942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.144977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.145104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.145138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.145276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.145310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.145482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.145516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.145635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.145667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 
00:27:28.353 [2024-12-11 15:08:21.145858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.145892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.146027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.146059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.146228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.146263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.146468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.146501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.353 [2024-12-11 15:08:21.146694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.353 [2024-12-11 15:08:21.146727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.353 qpair failed and we were unable to recover it. 00:27:28.354 [2024-12-11 15:08:21.146919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.354 [2024-12-11 15:08:21.146952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.354 qpair failed and we were unable to recover it. 00:27:28.354 [2024-12-11 15:08:21.147139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.354 [2024-12-11 15:08:21.147183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.354 qpair failed and we were unable to recover it. 00:27:28.354 [2024-12-11 15:08:21.147301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.354 [2024-12-11 15:08:21.147335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.354 qpair failed and we were unable to recover it. 00:27:28.354 [2024-12-11 15:08:21.147464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.354 [2024-12-11 15:08:21.147497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.354 qpair failed and we were unable to recover it. 00:27:28.354 [2024-12-11 15:08:21.147688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.354 [2024-12-11 15:08:21.147720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.354 qpair failed and we were unable to recover it. 
00:27:28.354 [2024-12-11 15:08:21.147888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.354 [2024-12-11 15:08:21.147921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420
00:27:28.354 qpair failed and we were unable to recover it.
00:27:28.354 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 15:08:21.148115 through 15:08:21.174866 ...]
00:27:28.357 [... connect() retries on tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 continue failing with errno = 111 through 15:08:21.175530 ...]
00:27:28.357 [2024-12-11 15:08:21.175588] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:28.357 [2024-12-11 15:08:21.175623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:28.357 [2024-12-11 15:08:21.175631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:28.357 [2024-12-11 15:08:21.175638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:28.357 [2024-12-11 15:08:21.175643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:28.357 [... connect() retries on tqpair=0x7f9cdc000b90 continue failing through 15:08:21.176551 ...]
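The app_setup_trace notices above are the target telling you how to pull its trace data. As a minimal sketch of acting on them on the test host while the application is still running (the shm name "nvmf" and instance id 0 are exactly what the notices print for this run; nothing else is assumed):

# Capture a snapshot of the registered tracepoint events, as suggested by the notice above.
spdk_trace -s nvmf -i 0

# Or keep the raw shared-memory trace file for offline analysis/debug.
cp /dev/shm/nvmf_trace.0 /tmp/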
00:27:28.358 [... connect() retries on tqpair=0x7f9cdc000b90 continue failing with errno = 111 ...]
00:27:28.358 [2024-12-11 15:08:21.177336] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:27:28.358 [2024-12-11 15:08:21.177444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:27:28.358 [2024-12-11 15:08:21.177530] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:27:28.358 [2024-12-11 15:08:21.177531] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:27:28.358 [... connect() retries on tqpair=0x7f9cdc000b90 continue failing through 15:08:21.178745 ...]
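Reactors coming up on cores 4, 5, 6 and 7 indicate the application was pinned to a four-core CPU mask covering exactly those cores (0xf0). The actual command line is driven by the autotest scripts and is not shown in this log; purely as a hypothetical illustration of how such a mask is passed to an SPDK app:

# Hypothetical invocation: pin the SPDK app's reactors to cores 4-7 via the CPU mask option.
./build/bin/nvmf_tgt -m 0xf0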
00:27:28.358 [... the connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it. sequence keeps repeating on tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 through 15:08:21.183021 ...]
00:27:28.358 [... connect() retries on tqpair=0x7f9cdc000b90 continue failing through 15:08:21.183622 ...]
00:27:28.358 [2024-12-11 15:08:21.183820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.358 [2024-12-11 15:08:21.183868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420
00:27:28.359 qpair failed and we were unable to recover it.
00:27:28.359 [... from this point the same connect() failed, errno = 111 / sock connection error / qpair failed sequence repeats against the new qpair handle tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 through 15:08:21.189746 ...]
00:27:28.359 [2024-12-11 15:08:21.190044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.359 [2024-12-11 15:08:21.190080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.359 qpair failed and we were unable to recover it. 00:27:28.359 [2024-12-11 15:08:21.190373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.359 [2024-12-11 15:08:21.190410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.359 qpair failed and we were unable to recover it. 00:27:28.359 [2024-12-11 15:08:21.190671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.359 [2024-12-11 15:08:21.190706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.359 qpair failed and we were unable to recover it. 00:27:28.359 [2024-12-11 15:08:21.190902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.359 [2024-12-11 15:08:21.190935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.359 qpair failed and we were unable to recover it. 00:27:28.359 [2024-12-11 15:08:21.191192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.359 [2024-12-11 15:08:21.191228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.359 qpair failed and we were unable to recover it. 00:27:28.359 [2024-12-11 15:08:21.191368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.359 [2024-12-11 15:08:21.191402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.359 qpair failed and we were unable to recover it. 00:27:28.359 [2024-12-11 15:08:21.191585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.359 [2024-12-11 15:08:21.191618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.359 qpair failed and we were unable to recover it. 00:27:28.359 [2024-12-11 15:08:21.191724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.359 [2024-12-11 15:08:21.191757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.359 qpair failed and we were unable to recover it. 00:27:28.359 [2024-12-11 15:08:21.192015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.359 [2024-12-11 15:08:21.192048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.359 qpair failed and we were unable to recover it. 00:27:28.359 [2024-12-11 15:08:21.192221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.359 [2024-12-11 15:08:21.192255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.359 qpair failed and we were unable to recover it. 
00:27:28.359 [2024-12-11 15:08:21.192524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.359 [2024-12-11 15:08:21.192557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.359 qpair failed and we were unable to recover it. 00:27:28.359 [2024-12-11 15:08:21.192680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.359 [2024-12-11 15:08:21.192713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.359 qpair failed and we were unable to recover it. 00:27:28.359 [2024-12-11 15:08:21.192879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.359 [2024-12-11 15:08:21.192913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.359 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.193178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.193234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.193413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.193448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.193570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.193605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.193867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.193901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.194073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.194107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.194309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.194344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.194615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.194649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 
00:27:28.360 [2024-12-11 15:08:21.194862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.194897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.195092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.195127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.195323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.195361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.195559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.195593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.195764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.195798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.195915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.195949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.196209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.196245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.196376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.196410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.196672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.196704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.196885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.196919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 
00:27:28.360 [2024-12-11 15:08:21.197087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.197121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.197408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.197444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.197562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.197596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.197848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.197883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.198052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.198086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.198364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.198400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.198671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.198706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.198898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.198933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.199178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.199215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.199400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.199437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 
00:27:28.360 [2024-12-11 15:08:21.199641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.199675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.199939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.199974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.200150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.200195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.200461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.200496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.200682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.200716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.200909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.200942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.201077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.201110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.201309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.201346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.201467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.201501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.201756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.201790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 
00:27:28.360 [2024-12-11 15:08:21.201996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.202029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.202145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.360 [2024-12-11 15:08:21.202190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.360 qpair failed and we were unable to recover it. 00:27:28.360 [2024-12-11 15:08:21.202452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.202487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.202665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.202706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.202878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.202911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.203079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.203112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.203297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.203332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.203506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.203539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.203664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.203698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.203885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.203918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 
00:27:28.361 [2024-12-11 15:08:21.204022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.204056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.204204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.204241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.204508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.204542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.204656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.204691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.204823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.204857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.205024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.205058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.205224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.205259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.205465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.205500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.205693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.205727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.205991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.206026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 
00:27:28.361 [2024-12-11 15:08:21.206287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.206323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.206511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.206545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.206746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.206781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.206969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.207003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.207201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.207237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.207407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.207441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.207624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.207658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.207827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.207862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.208029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.208065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.208348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.208384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 
00:27:28.361 [2024-12-11 15:08:21.208582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.208617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.208791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.208826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.209000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.209034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.209208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.209243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.209486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.209520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.209718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.209754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.209925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.209959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.210148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.210193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.210364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.210397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.210663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.210697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 
00:27:28.361 [2024-12-11 15:08:21.210869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.361 [2024-12-11 15:08:21.210903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.361 qpair failed and we were unable to recover it. 00:27:28.361 [2024-12-11 15:08:21.211095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.211130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.211331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.211366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.211607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.211655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.211854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.211888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.212060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.212094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.212359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.212396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.212538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.212572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.212858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.212894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.213132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.213177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 
00:27:28.362 [2024-12-11 15:08:21.213294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.213327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.213516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.213549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.213720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.213753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.213957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.213991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.214169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.214205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.214414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.214449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.214626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.214660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.214902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.214935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.215194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.215228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.215517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.215549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 
00:27:28.362 [2024-12-11 15:08:21.215821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.215854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.216120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.216152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.216352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.216386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.216556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.216589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.216847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.216880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.217000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.217034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.217226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.217261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.217378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.217412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.217515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.217549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.217669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.217702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 
00:27:28.362 [2024-12-11 15:08:21.217908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.217963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.218276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.218322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.218599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.218633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.218897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.218931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.362 [2024-12-11 15:08:21.219179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.362 [2024-12-11 15:08:21.219215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.362 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.219340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.219374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.219561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.219595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.219763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.219798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.219912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.219947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.220060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.220094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 
00:27:28.363 [2024-12-11 15:08:21.220336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.220371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.220487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.220521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.220633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.220667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.220783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.220818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.220944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.220979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.221260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.221297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.221469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.221503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.221628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.221663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.221839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.221874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.222047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.222081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 
00:27:28.363 [2024-12-11 15:08:21.222275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.222311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.222509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.222543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.222736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.222770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.222938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.222972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.223179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.223216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.223336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.223372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.223610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.223645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.223842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.223881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.224068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.224102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.224280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.224313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 
00:27:28.363 [2024-12-11 15:08:21.224498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.224531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.224716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.224750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.224939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.224973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.225143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.225186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.225426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.225459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.225635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.225669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.225934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.225967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.226203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.226237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.226410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.226443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.226680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.226713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 
00:27:28.363 [2024-12-11 15:08:21.226853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.226893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.227065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.227098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.227329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.227364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.363 [2024-12-11 15:08:21.227486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.363 [2024-12-11 15:08:21.227519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.363 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.227721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.227755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.227951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.227984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.228153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.228220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.228348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.228382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.228673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.228708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.228881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.228917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 
00:27:28.364 [2024-12-11 15:08:21.229173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.229209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.229334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.229368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.229606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.229640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.229825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.229859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.230060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.230095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.230303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.230339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.230580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.230615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.230737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.230770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.231033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.231066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.231326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.231361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 
00:27:28.364 [2024-12-11 15:08:21.231610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.231644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.231763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.231797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.231989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.232024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.232257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.232293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.232481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.232516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.232641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.232675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.232777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.232811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.233022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.233064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.233235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.233270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.233533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.233568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 
00:27:28.364 [2024-12-11 15:08:21.233681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.233715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.233885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.233919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.234172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.234208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.234333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.234368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.234592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.234628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.234872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.234910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.235093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.235128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.235355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.235418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.235631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.235678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.236024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.236067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 
00:27:28.364 [2024-12-11 15:08:21.236264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.236301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.236504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.236537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.236795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.236829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.236949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.236983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.237245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.364 [2024-12-11 15:08:21.237279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.364 qpair failed and we were unable to recover it. 00:27:28.364 [2024-12-11 15:08:21.237459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.237493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.237728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.237761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.237955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.237988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.238278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.238311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.238495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.238529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 
00:27:28.365 [2024-12-11 15:08:21.238821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.238854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.239060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.239093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.239210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.239243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.239417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.239451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.239625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.239658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.239833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.239868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.240146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.240209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.240411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.240445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.240641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.240674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.240856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.240889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 
00:27:28.365 [2024-12-11 15:08:21.241152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.241200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.241398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.241432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.241620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.241654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.241770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.241804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.241913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.241945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.242080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.242113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.242261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.242296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.242475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.242513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.242703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.242737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.242934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.242967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 
00:27:28.365 [2024-12-11 15:08:21.243228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.243262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.243368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.243402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.243603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.243638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.243860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.243893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.244013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.244047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.244227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.244261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.244443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.244476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.244668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.244702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.244809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.244841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.245042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.245075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 
00:27:28.365 [2024-12-11 15:08:21.245349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.245382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.245586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.245621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.245865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.245898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.246016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.246050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.246290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.246325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.246528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.246562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.246757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.246790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.365 [2024-12-11 15:08:21.246967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.365 [2024-12-11 15:08:21.247000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.365 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.247263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.247297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.247421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.247454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 
00:27:28.366 [2024-12-11 15:08:21.247564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.247598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.247886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.247919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.248090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.248124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.248419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.248452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.248656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.248690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.248883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.248916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.249105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.249140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.249344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.249378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.249549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.249582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.249720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.249753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 
00:27:28.366 [2024-12-11 15:08:21.249859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.249893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.250141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.250183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.250300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.250332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.250504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.250538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.250706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.250740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.250862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.250894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.251091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.251124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.251326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.251378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.251505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.251540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.251716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.251751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 
00:27:28.366 [2024-12-11 15:08:21.251927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.251961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.252131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.252180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.252292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.252325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.252502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.252536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.252699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.252732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.252970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.253003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.253179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.253214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.253326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.253361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.253621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.253655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.253771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.253805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 
00:27:28.366 [2024-12-11 15:08:21.253976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.254010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.254143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.254189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.254371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.254405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.254577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.254610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.254727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.254761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.366 qpair failed and we were unable to recover it. 00:27:28.366 [2024-12-11 15:08:21.254863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.366 [2024-12-11 15:08:21.254896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.367 qpair failed and we were unable to recover it. 00:27:28.367 [2024-12-11 15:08:21.255062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.367 [2024-12-11 15:08:21.255095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.367 qpair failed and we were unable to recover it. 00:27:28.367 [2024-12-11 15:08:21.255254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.367 [2024-12-11 15:08:21.255289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.367 qpair failed and we were unable to recover it. 00:27:28.367 [2024-12-11 15:08:21.255531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.367 [2024-12-11 15:08:21.255565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.367 qpair failed and we were unable to recover it. 00:27:28.367 [2024-12-11 15:08:21.255683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.367 [2024-12-11 15:08:21.255716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.367 qpair failed and we were unable to recover it. 
00:27:28.367 [2024-12-11 15:08:21.255885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.367 [2024-12-11 15:08:21.255919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.367 qpair failed and we were unable to recover it. 00:27:28.367 [2024-12-11 15:08:21.256136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.367 [2024-12-11 15:08:21.256181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.367 qpair failed and we were unable to recover it. 00:27:28.367 [2024-12-11 15:08:21.256295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.367 [2024-12-11 15:08:21.256328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.367 qpair failed and we were unable to recover it. 00:27:28.367 [2024-12-11 15:08:21.256625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.367 [2024-12-11 15:08:21.256658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.367 qpair failed and we were unable to recover it. 00:27:28.367 [2024-12-11 15:08:21.256778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.367 [2024-12-11 15:08:21.256810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.367 qpair failed and we were unable to recover it. 00:27:28.367 [2024-12-11 15:08:21.256935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.367 [2024-12-11 15:08:21.256967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.367 qpair failed and we were unable to recover it. 00:27:28.367 [2024-12-11 15:08:21.257169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.367 [2024-12-11 15:08:21.257205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.367 qpair failed and we were unable to recover it. 00:27:28.367 [2024-12-11 15:08:21.257442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.367 [2024-12-11 15:08:21.257475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.367 qpair failed and we were unable to recover it. 00:27:28.367 [2024-12-11 15:08:21.257595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.367 [2024-12-11 15:08:21.257627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.367 qpair failed and we were unable to recover it. 00:27:28.367 [2024-12-11 15:08:21.257750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.367 [2024-12-11 15:08:21.257783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.367 qpair failed and we were unable to recover it. 
00:27:28.367 [2024-12-11 15:08:21.257891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.367 [2024-12-11 15:08:21.257923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420
00:27:28.367 qpair failed and we were unable to recover it.
[... the connect() failed / sock connection error / qpair failed triple above repeats for tqpair=0x7f9ce8000b90 until 2024-12-11 15:08:21.269120; repetitions condensed ...]
00:27:28.369 [2024-12-11 15:08:21.269255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.369 [2024-12-11 15:08:21.269301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420
00:27:28.369 qpair failed and we were unable to recover it.
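For context when reading the failures above: errno = 111 is ECONNREFUSED on Linux, so each posix_sock_create() attempt is being actively refused because nothing is accepting TCP connections on 10.0.0.2 port 4420 (the NVMe/TCP listener address from this run) at that moment, and nvme_tcp_qpair_connect_sock() then reports the qpair as failed. A minimal, hypothetical sketch in plain POSIX sockets (not SPDK code) that produces the same errno when the listener is down:

    /* Illustrative only: what a connect() that fails with errno = 111 looks
     * like at the POSIX level.  On Linux, 111 is ECONNREFUSED, i.e. the peer
     * (here 10.0.0.2:4420, taken from the log above) refused the connection
     * because no listener was bound to that port. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP listener port */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
            /* With no listener on 10.0.0.2:4420 this typically prints:
             *   connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }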
[... the connect() failed / sock connection error / qpair failed triple keeps repeating for tqpair=0x14ccbe0 until 2024-12-11 15:08:21.291858; repetitions condensed; the test-script trace lines below were interleaved with them ...]
00:27:28.369 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:28.369 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:27:28.369 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:28.369 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:28.369 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
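The trace lines above (the "(( i == 0 ))" check and "return 0" in autotest_common.sh, plus "timing_exit start_nvmf_tgt" from nvmf/common.sh) show the nvmf_target_disconnect_tc2 harness moving on while the host side apparently keeps re-attempting the TCP connection; the repeating ECONNREFUSED triple is what such a reconnect loop emits while the target end is not listening. A rough, hypothetical sketch of a bounded connect-retry loop of that shape (illustrative only; this is not the actual SPDK/nvme_tcp retry logic, and the attempt count and back-off delay are invented):

    /* Illustrative only: a bounded retry loop around connect(), of the kind a
     * host-side initiator effectively performs while its target is down.  The
     * attempt count and delay are invented for the sketch; they are not taken
     * from SPDK or from this log. */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    static bool try_connect(const char *ip, uint16_t port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return false;
        }

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, ip, &addr.sin_addr);

        bool ok = (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0);
        if (!ok) {
            /* While the target is down this keeps printing errno = 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return ok;
    }

    int main(void)
    {
        const int max_attempts = 30;        /* invented retry budget */

        for (int i = 0; i < max_attempts; i++) {
            if (try_connect("10.0.0.2", 4420)) {
                printf("connected on attempt %d\n", i + 1);
                return 0;
            }
            usleep(100 * 1000);             /* invented 100 ms back-off */
        }

        /* Mirrors the log's outcome: the listener never came back in time,
         * so the connection attempt is given up on. */
        printf("giving up: qpair could not be connected\n");
        return 1;
    }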
00:27:28.372 [2024-12-11 15:08:21.292010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:28.372 [2024-12-11 15:08:21.292072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420
00:27:28.372 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed triple repeats for tqpair=0x7f9cdc000b90 until 2024-12-11 15:08:21.298047; repetitions condensed ...]
00:27:28.373 [2024-12-11 15:08:21.298172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.298213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.298359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.298392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.298515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.298549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.298679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.298713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.298887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.298920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.299024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.299057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.299176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.299210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.299331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.299363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.299553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.299587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.299714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.299747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 
00:27:28.373 [2024-12-11 15:08:21.299857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.299891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.300080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.300114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.300272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.300306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.300475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.300507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.300708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.300741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.300865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.300899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.301009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.301041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.301227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.301269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.301440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.301474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.301593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.301626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 
00:27:28.373 [2024-12-11 15:08:21.301844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.301878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.302059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.302092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.302226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.302260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.302395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.302428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.302554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.302588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.302700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.302734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.302912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.302945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.303140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.303187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.303356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.303389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.303506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.303539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 
00:27:28.373 [2024-12-11 15:08:21.303730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.303764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.303934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.303968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.304212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.373 [2024-12-11 15:08:21.304253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.373 qpair failed and we were unable to recover it. 00:27:28.373 [2024-12-11 15:08:21.304452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.304485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.304605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.304640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.304764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.304799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.304923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.304956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.305135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.305184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.305311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.305344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.305449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.305483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 
00:27:28.374 [2024-12-11 15:08:21.305669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.305707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.305831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.305866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.306068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.306101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.306255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.306290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.306400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.306434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.306536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.306571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.306694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.306732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.306892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.306927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.307117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.307150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.307269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.307304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 
00:27:28.374 [2024-12-11 15:08:21.307447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.307481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.307601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.307635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.307752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.307786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.307952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.307986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.308182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.308218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.308388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.308421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.308593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.308626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.308748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.374 [2024-12-11 15:08:21.308782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.308895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.308927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 
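Interleaved with the connection errors in the chunk above is the harness's own xtrace output: before the test body runs, nvmf/common.sh installs a cleanup trap so shared memory gets dumped and the target gets torn down even if the test is interrupted. A minimal bash sketch of that pattern follows; process_shm and nvmftestfini are SPDK test helpers named in the trace line, and the stub bodies here are placeholders for illustration, not SPDK's actual implementations:

  # Stubs standing in for the real SPDK test helpers named in the trace line.
  process_shm() { echo "would dump shared memory for app with shm id $2"; }
  nvmftestfini() { echo "would stop the nvmf target and clean up the test network"; }
  NVMF_APP_SHM_ID=0
  # The '|| :' keeps a process_shm failure from aborting the rest of the cleanup.
  trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT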
00:27:28.374 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:28.374 [2024-12-11 15:08:21.309115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.309149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.309346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.374 [2024-12-11 15:08:21.309380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.309498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.309531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:28.374 [2024-12-11 15:08:21.309746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.309780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.309986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.310020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.310142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.310187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.310314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.310347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.310474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.310507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 
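The xtrace lines buried in this chunk show the next setup step of the test itself: target_disconnect.sh@19 creates the backing block device with rpc_cmd bdev_malloc_create 64 512 -b Malloc0, and the surrounding xtrace_disable / set +x lines simply switch command tracing back off. rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py talking to the running target over its RPC socket, so outside the harness the equivalent call would look roughly like the following (default socket path assumed):

  # Create a 64 MB RAM-backed malloc bdev with a 512-byte block size, named Malloc0.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0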
00:27:28.374 [2024-12-11 15:08:21.310613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.310646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.310838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.310871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.310983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.311015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.374 qpair failed and we were unable to recover it. 00:27:28.374 [2024-12-11 15:08:21.311244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.374 [2024-12-11 15:08:21.311278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.311493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.311527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.311716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.311750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.311951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.311984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.312108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.312141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.312277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.312311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.312430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.312462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 
00:27:28.375 [2024-12-11 15:08:21.312699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.312732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.312998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.313038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.313210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.313246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.313430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.313463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.313602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.313636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.313757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.313789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.313970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.314003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.314104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.314137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.314272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.314306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.314433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.314465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 
00:27:28.375 [2024-12-11 15:08:21.314589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.314624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.314742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.314775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.314895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.314928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.315104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.315137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.315362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.315397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.315521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.315554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.315726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.315760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.315874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.315907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.316034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.316067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.316256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.316289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 
00:27:28.375 [2024-12-11 15:08:21.316394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.316428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.316596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.316628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.316741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.316774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.316960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.316994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.317104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.317137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.317288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.317323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.317500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.317534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.317656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.317689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.317902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.317936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 00:27:28.375 [2024-12-11 15:08:21.318206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.318241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.375 qpair failed and we were unable to recover it. 
00:27:28.375 [2024-12-11 15:08:21.318377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.375 [2024-12-11 15:08:21.318415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.318542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.318577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.318717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.318750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.318867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.318901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.319030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.319068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.319241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.319277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.319459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.319492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.319628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.319661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.319850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.319883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.320140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.320184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 
00:27:28.376 [2024-12-11 15:08:21.320357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.320390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.320495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.320536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.320737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.320770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.320943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.320978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.321146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.321188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.321360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.321393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.321562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.321595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.321717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.321749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.321948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.321982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.322177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.322211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 
00:27:28.376 [2024-12-11 15:08:21.322320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.322353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.322482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.322515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.322684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.322718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.376 [2024-12-11 15:08:21.322821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.376 [2024-12-11 15:08:21.322854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.376 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.323093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.323127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.323350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.323384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.323622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.323655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.323779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.323812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.324076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.324109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.324264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.324299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 
00:27:28.640 [2024-12-11 15:08:21.324490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.324523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.324716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.324749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.324926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.324958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.325181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.325217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.325337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.325370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.325576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.325611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.325825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.325859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.326032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.326065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.326188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.326223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.326391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.326423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 
00:27:28.640 [2024-12-11 15:08:21.326596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.326630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.326886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.326921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.327049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.327082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.327186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.327220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.327346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.327381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.327574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.327607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.327796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.327829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.328015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.328048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.328193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.328228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.328418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.328451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 
00:27:28.640 [2024-12-11 15:08:21.328629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.328663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.328866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.328903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.329119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.329152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.329348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.329381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.329503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.329537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.329704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.640 [2024-12-11 15:08:21.329737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.640 qpair failed and we were unable to recover it. 00:27:28.640 [2024-12-11 15:08:21.329854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.329886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.330055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.330087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.330270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.330304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.330427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.330459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 
00:27:28.641 [2024-12-11 15:08:21.330567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.330601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.330794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.330827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.331066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.331099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.331257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.331291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.331474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.331506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.331788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.331822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.332078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.332112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.332320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.332355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.332546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.332578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.332818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.332852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 
00:27:28.641 [2024-12-11 15:08:21.332976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.333009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.333216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.333250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.333467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.333502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.333674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.333708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.333822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.333855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.334093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.334126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.334378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.334413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.334546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.334580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.334720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.334754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.334950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.334985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 
00:27:28.641 [2024-12-11 15:08:21.335091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.335122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.335399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.335435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.335562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.335596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.335714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.335748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.335867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.335900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.336184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.336220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.336378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.336418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.336683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.336718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.336933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.336969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.337239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.337276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 
00:27:28.641 [2024-12-11 15:08:21.337402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.337437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.337612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.337653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.337878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.337911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.338080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.338114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.338290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.641 [2024-12-11 15:08:21.338323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.641 qpair failed and we were unable to recover it. 00:27:28.641 [2024-12-11 15:08:21.338604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.338637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.338878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.338911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.339089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.339122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.339255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.339289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.339563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.339597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 
00:27:28.642 [2024-12-11 15:08:21.339893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.339926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.340131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.340188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 Malloc0 00:27:28.642 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.642 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:28.642 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.642 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:28.642 [2024-12-11 15:08:21.342453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.342506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.342833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.342869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.343070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.343104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.343270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.343305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.343497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.343528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.343786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.343819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 
00:27:28.642 [2024-12-11 15:08:21.343952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.343985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.344100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.344133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.344327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.344360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.344529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.344563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.344669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.344701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.344938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.344971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.345080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.345112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.345294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.345329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.345530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.345570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.345682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.345714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 
00:27:28.642 [2024-12-11 15:08:21.346005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.346038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.346254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.346290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.346508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.346540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.346791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.346823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.347101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.347135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.347355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.347388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.347512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.347545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.347723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.347756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.347945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.347977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 
00:27:28.642 [2024-12-11 15:08:21.348120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.642 [2024-12-11 15:08:21.348144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.348221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.348485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.348517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.348688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.348727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.348938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.348971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.349141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.642 [2024-12-11 15:08:21.349185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.642 qpair failed and we were unable to recover it. 00:27:28.642 [2024-12-11 15:08:21.349448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.349482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.349606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.349639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.349757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.349790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.349965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.349997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 
00:27:28.643 [2024-12-11 15:08:21.350281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.350315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.350580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.350613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.350811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.350843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.351012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.351044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.351222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.351255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.351426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.351459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.351658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.351690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.351970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.352003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.352119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.352150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.352426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.352459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 
00:27:28.643 [2024-12-11 15:08:21.352573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.352607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.352878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.352911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.353095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.353126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.353372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.353440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.353591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.353628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.353814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.353849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.354059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.354094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.354294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.354331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.354502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.354537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.354810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.354844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 
00:27:28.643 [2024-12-11 15:08:21.355096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.355169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce8000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.355467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.355532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9ce0000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.355824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.355860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.356038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.356071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.643 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:28.643 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.643 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:28.643 [2024-12-11 15:08:21.357956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.358006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.358314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.358352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.358556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.358590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.358853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.358885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 
00:27:28.643 [2024-12-11 15:08:21.359104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.359138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.359258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.359292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.359537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.359570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.359740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.359773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.360044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.360077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.643 qpair failed and we were unable to recover it. 00:27:28.643 [2024-12-11 15:08:21.360268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.643 [2024-12-11 15:08:21.360303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.360498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.360531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.360781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.360813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.361009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.361043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.361179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.361214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 
00:27:28.644 [2024-12-11 15:08:21.361471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.361504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.361672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.361704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.361943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.361976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.362145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.362192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.362364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.362398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.362671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.362704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.362876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.362909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.363087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.363119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.363395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.363430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.363669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.363702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 
00:27:28.644 [2024-12-11 15:08:21.363962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.363994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.364191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.364225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.364395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.364428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.364631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.364663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.364851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.364884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.644 [2024-12-11 15:08:21.365066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.365099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.365232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.365265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:28.644 [2024-12-11 15:08:21.365394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.365425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it.
00:27:28.644 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.644 [2024-12-11 15:08:21.365671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.365704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:28.644 [2024-12-11 15:08:21.365882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.365916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.366086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.366120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.366308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.366342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.366585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.366618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.366785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.366818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.366926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.366958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.367127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.367167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 00:27:28.644 [2024-12-11 15:08:21.367343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.367376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.644 qpair failed and we were unable to recover it. 
00:27:28.644 [2024-12-11 15:08:21.367507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.644 [2024-12-11 15:08:21.367541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.367731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.367765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.367868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.367901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.368029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.368063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.368252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.368285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.368484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.368517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.368630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.368663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.368832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.368865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.369101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.369134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.369325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.369358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 
00:27:28.645 [2024-12-11 15:08:21.369616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.369648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.369933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.369967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.370085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.370118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.370394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.370428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.370669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.370702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.370875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.370908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.371115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.371148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.371283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.371317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9cdc000b90 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.371518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.371569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.371682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.371717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 
00:27:28.645 [2024-12-11 15:08:21.371979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.372013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.372185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.372222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.372482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.372515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.372800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.372833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.645 [2024-12-11 15:08:21.373002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.373036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.373249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.373286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.373407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.373440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.645 [2024-12-11 15:08:21.373699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.373734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 
00:27:28.645 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:28.645 [2024-12-11 15:08:21.373974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.374009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.374194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.374231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.374346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.374380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.374643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.374676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.374882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.374916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.375204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.375240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.375360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.375394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.375599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.375632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.375749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.375782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.645 qpair failed and we were unable to recover it. 00:27:28.645 [2024-12-11 15:08:21.375894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.645 [2024-12-11 15:08:21.375927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 
00:27:28.646 [2024-12-11 15:08:21.376172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.646 [2024-12-11 15:08:21.376207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14ccbe0 with addr=10.0.0.2, port=4420 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-11 15:08:21.376364] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.646 [2024-12-11 15:08:21.378843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.646 [2024-12-11 15:08:21.378975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.646 [2024-12-11 15:08:21.379024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.646 [2024-12-11 15:08:21.379048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.646 [2024-12-11 15:08:21.379070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.646 [2024-12-11 15:08:21.379120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.646 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:28.646 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.646 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:28.646 [2024-12-11 15:08:21.388707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.646 [2024-12-11 15:08:21.388815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.646 [2024-12-11 15:08:21.388857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.646 [2024-12-11 15:08:21.388881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.646 [2024-12-11 15:08:21.388900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.646 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.646 [2024-12-11 15:08:21.388945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.646 qpair failed and we were unable to recover it. 
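A minimal sketch of the target-side RPC sequence the rpc_cmd traces above are exercising, assuming SPDK's stock scripts/rpc.py client against the default /var/tmp/spdk.sock socket; the subsystem NQN, bdev name, address and port are taken from the trace, while the transport and subsystem creation steps are assumed to have run earlier in the script:

  scripts/rpc.py nvmf_create_transport -t tcp                                                    # assumed earlier step: create the TCP transport
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a                             # assumed earlier step: create the subsystem, allow any host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                        # matches target_disconnect.sh@24 above
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # matches target_disconnect.sh@25 above
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420                # matches target_disconnect.sh@26 above

The connect() failed, errno = 111 (ECONNREFUSED) lines appear to come from host-side connect attempts made before the listener is up; once the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice is printed, the failures switch to fabric CONNECT command errors (sct 1, sc 130, "Unknown controller ID 0x1") on qpair id 3, as logged above.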
00:27:28.646 15:08:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3271567 00:27:28.646 [2024-12-11 15:08:21.398717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.646 [2024-12-11 15:08:21.398805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.646 [2024-12-11 15:08:21.398833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.646 [2024-12-11 15:08:21.398847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.646 [2024-12-11 15:08:21.398861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.646 [2024-12-11 15:08:21.398890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-11 15:08:21.408782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.646 [2024-12-11 15:08:21.408880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.646 [2024-12-11 15:08:21.408900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.646 [2024-12-11 15:08:21.408910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.646 [2024-12-11 15:08:21.408920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.646 [2024-12-11 15:08:21.408941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-11 15:08:21.418727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.646 [2024-12-11 15:08:21.418783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.646 [2024-12-11 15:08:21.418801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.646 [2024-12-11 15:08:21.418809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.646 [2024-12-11 15:08:21.418819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.646 [2024-12-11 15:08:21.418836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.646 qpair failed and we were unable to recover it. 
00:27:28.646 [2024-12-11 15:08:21.428705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.646 [2024-12-11 15:08:21.428766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.646 [2024-12-11 15:08:21.428785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.646 [2024-12-11 15:08:21.428794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.646 [2024-12-11 15:08:21.428800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.646 [2024-12-11 15:08:21.428818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-11 15:08:21.438713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.646 [2024-12-11 15:08:21.438772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.646 [2024-12-11 15:08:21.438788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.646 [2024-12-11 15:08:21.438795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.646 [2024-12-11 15:08:21.438802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.646 [2024-12-11 15:08:21.438818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-11 15:08:21.448780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.646 [2024-12-11 15:08:21.448878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.646 [2024-12-11 15:08:21.448894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.646 [2024-12-11 15:08:21.448901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.646 [2024-12-11 15:08:21.448907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.646 [2024-12-11 15:08:21.448923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.646 qpair failed and we were unable to recover it. 
00:27:28.646 [2024-12-11 15:08:21.458840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.646 [2024-12-11 15:08:21.458898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.646 [2024-12-11 15:08:21.458913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.646 [2024-12-11 15:08:21.458920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.646 [2024-12-11 15:08:21.458926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.646 [2024-12-11 15:08:21.458941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-11 15:08:21.468844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.646 [2024-12-11 15:08:21.468904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.646 [2024-12-11 15:08:21.468919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.646 [2024-12-11 15:08:21.468926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.646 [2024-12-11 15:08:21.468933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.646 [2024-12-11 15:08:21.468948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.646 [2024-12-11 15:08:21.478881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.646 [2024-12-11 15:08:21.478937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.646 [2024-12-11 15:08:21.478952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.646 [2024-12-11 15:08:21.478960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.646 [2024-12-11 15:08:21.478966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.646 [2024-12-11 15:08:21.478981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.646 qpair failed and we were unable to recover it. 
00:27:28.646 [2024-12-11 15:08:21.488908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.646 [2024-12-11 15:08:21.488992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.646 [2024-12-11 15:08:21.489012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.646 [2024-12-11 15:08:21.489019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.646 [2024-12-11 15:08:21.489025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.646 [2024-12-11 15:08:21.489040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.646 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-11 15:08:21.498912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.647 [2024-12-11 15:08:21.499013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.647 [2024-12-11 15:08:21.499028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.647 [2024-12-11 15:08:21.499035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.647 [2024-12-11 15:08:21.499041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.647 [2024-12-11 15:08:21.499056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-11 15:08:21.508931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.647 [2024-12-11 15:08:21.508985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.647 [2024-12-11 15:08:21.509003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.647 [2024-12-11 15:08:21.509010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.647 [2024-12-11 15:08:21.509017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.647 [2024-12-11 15:08:21.509033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.647 qpair failed and we were unable to recover it. 
00:27:28.647 [2024-12-11 15:08:21.518926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.647 [2024-12-11 15:08:21.518982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.647 [2024-12-11 15:08:21.518997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.647 [2024-12-11 15:08:21.519004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.647 [2024-12-11 15:08:21.519010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.647 [2024-12-11 15:08:21.519025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-11 15:08:21.528988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.647 [2024-12-11 15:08:21.529045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.647 [2024-12-11 15:08:21.529060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.647 [2024-12-11 15:08:21.529068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.647 [2024-12-11 15:08:21.529075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.647 [2024-12-11 15:08:21.529091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-11 15:08:21.539017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.647 [2024-12-11 15:08:21.539076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.647 [2024-12-11 15:08:21.539091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.647 [2024-12-11 15:08:21.539098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.647 [2024-12-11 15:08:21.539104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.647 [2024-12-11 15:08:21.539119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.647 qpair failed and we were unable to recover it. 
00:27:28.647 [2024-12-11 15:08:21.549051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.647 [2024-12-11 15:08:21.549109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.647 [2024-12-11 15:08:21.549126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.647 [2024-12-11 15:08:21.549133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.647 [2024-12-11 15:08:21.549143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.647 [2024-12-11 15:08:21.549163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-11 15:08:21.559083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.647 [2024-12-11 15:08:21.559156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.647 [2024-12-11 15:08:21.559175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.647 [2024-12-11 15:08:21.559182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.647 [2024-12-11 15:08:21.559189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.647 [2024-12-11 15:08:21.559205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-11 15:08:21.569109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.647 [2024-12-11 15:08:21.569169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.647 [2024-12-11 15:08:21.569184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.647 [2024-12-11 15:08:21.569191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.647 [2024-12-11 15:08:21.569198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.647 [2024-12-11 15:08:21.569214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.647 qpair failed and we were unable to recover it. 
00:27:28.647 [2024-12-11 15:08:21.579172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.647 [2024-12-11 15:08:21.579229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.647 [2024-12-11 15:08:21.579244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.647 [2024-12-11 15:08:21.579251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.647 [2024-12-11 15:08:21.579258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.647 [2024-12-11 15:08:21.579273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-11 15:08:21.589169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.647 [2024-12-11 15:08:21.589228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.647 [2024-12-11 15:08:21.589243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.647 [2024-12-11 15:08:21.589250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.647 [2024-12-11 15:08:21.589257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.647 [2024-12-11 15:08:21.589272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-11 15:08:21.599183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.647 [2024-12-11 15:08:21.599241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.647 [2024-12-11 15:08:21.599258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.647 [2024-12-11 15:08:21.599266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.647 [2024-12-11 15:08:21.599273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.647 [2024-12-11 15:08:21.599289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.647 qpair failed and we were unable to recover it. 
00:27:28.647 [2024-12-11 15:08:21.609293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.647 [2024-12-11 15:08:21.609370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.647 [2024-12-11 15:08:21.609385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.647 [2024-12-11 15:08:21.609393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.647 [2024-12-11 15:08:21.609399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.647 [2024-12-11 15:08:21.609414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-11 15:08:21.619291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.647 [2024-12-11 15:08:21.619367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.647 [2024-12-11 15:08:21.619382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.647 [2024-12-11 15:08:21.619389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.647 [2024-12-11 15:08:21.619396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.647 [2024-12-11 15:08:21.619411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.647 qpair failed and we were unable to recover it. 00:27:28.647 [2024-12-11 15:08:21.629244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.647 [2024-12-11 15:08:21.629328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.647 [2024-12-11 15:08:21.629345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.648 [2024-12-11 15:08:21.629352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.648 [2024-12-11 15:08:21.629359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.648 [2024-12-11 15:08:21.629376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.648 qpair failed and we were unable to recover it. 
00:27:28.648 [2024-12-11 15:08:21.639306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.648 [2024-12-11 15:08:21.639376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.648 [2024-12-11 15:08:21.639395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.648 [2024-12-11 15:08:21.639403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.648 [2024-12-11 15:08:21.639409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.648 [2024-12-11 15:08:21.639425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.648 qpair failed and we were unable to recover it. 00:27:28.648 [2024-12-11 15:08:21.649355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.648 [2024-12-11 15:08:21.649417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.648 [2024-12-11 15:08:21.649433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.648 [2024-12-11 15:08:21.649441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.648 [2024-12-11 15:08:21.649447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.648 [2024-12-11 15:08:21.649463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.648 qpair failed and we were unable to recover it. 00:27:28.648 [2024-12-11 15:08:21.659426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.648 [2024-12-11 15:08:21.659497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.648 [2024-12-11 15:08:21.659512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.648 [2024-12-11 15:08:21.659519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.648 [2024-12-11 15:08:21.659526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.648 [2024-12-11 15:08:21.659542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.648 qpair failed and we were unable to recover it. 
00:27:28.648 [2024-12-11 15:08:21.669387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.648 [2024-12-11 15:08:21.669445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.648 [2024-12-11 15:08:21.669459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.648 [2024-12-11 15:08:21.669467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.648 [2024-12-11 15:08:21.669474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.648 [2024-12-11 15:08:21.669488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.648 qpair failed and we were unable to recover it. 00:27:28.648 [2024-12-11 15:08:21.679421] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.648 [2024-12-11 15:08:21.679477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.648 [2024-12-11 15:08:21.679495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.648 [2024-12-11 15:08:21.679504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.648 [2024-12-11 15:08:21.679517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.648 [2024-12-11 15:08:21.679535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.648 qpair failed and we were unable to recover it. 00:27:28.908 [2024-12-11 15:08:21.689468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.908 [2024-12-11 15:08:21.689528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.908 [2024-12-11 15:08:21.689547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.908 [2024-12-11 15:08:21.689555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.908 [2024-12-11 15:08:21.689562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.908 [2024-12-11 15:08:21.689579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.908 qpair failed and we were unable to recover it. 
00:27:28.908 [2024-12-11 15:08:21.699486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.908 [2024-12-11 15:08:21.699543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.908 [2024-12-11 15:08:21.699562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.908 [2024-12-11 15:08:21.699570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.908 [2024-12-11 15:08:21.699576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.908 [2024-12-11 15:08:21.699593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-12-11 15:08:21.709506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.908 [2024-12-11 15:08:21.709563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.908 [2024-12-11 15:08:21.709578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.908 [2024-12-11 15:08:21.709586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.908 [2024-12-11 15:08:21.709592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.908 [2024-12-11 15:08:21.709608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-12-11 15:08:21.719460] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.908 [2024-12-11 15:08:21.719519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.908 [2024-12-11 15:08:21.719533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.908 [2024-12-11 15:08:21.719540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.908 [2024-12-11 15:08:21.719547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.908 [2024-12-11 15:08:21.719563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.908 qpair failed and we were unable to recover it. 
00:27:28.908 [2024-12-11 15:08:21.729580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.908 [2024-12-11 15:08:21.729639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.908 [2024-12-11 15:08:21.729655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.908 [2024-12-11 15:08:21.729663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.908 [2024-12-11 15:08:21.729670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.908 [2024-12-11 15:08:21.729685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-12-11 15:08:21.739594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.908 [2024-12-11 15:08:21.739662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.908 [2024-12-11 15:08:21.739677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.908 [2024-12-11 15:08:21.739684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.908 [2024-12-11 15:08:21.739690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.908 [2024-12-11 15:08:21.739705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-12-11 15:08:21.749621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.908 [2024-12-11 15:08:21.749691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.908 [2024-12-11 15:08:21.749706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.908 [2024-12-11 15:08:21.749714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.908 [2024-12-11 15:08:21.749721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.908 [2024-12-11 15:08:21.749736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.908 qpair failed and we were unable to recover it. 
00:27:28.908 [2024-12-11 15:08:21.759702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.908 [2024-12-11 15:08:21.759759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.908 [2024-12-11 15:08:21.759773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.908 [2024-12-11 15:08:21.759780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.908 [2024-12-11 15:08:21.759787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.908 [2024-12-11 15:08:21.759802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-12-11 15:08:21.769719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.908 [2024-12-11 15:08:21.769817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.908 [2024-12-11 15:08:21.769835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.908 [2024-12-11 15:08:21.769843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.908 [2024-12-11 15:08:21.769849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.908 [2024-12-11 15:08:21.769864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-12-11 15:08:21.779705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.908 [2024-12-11 15:08:21.779761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.908 [2024-12-11 15:08:21.779775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.908 [2024-12-11 15:08:21.779782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.908 [2024-12-11 15:08:21.779789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.908 [2024-12-11 15:08:21.779804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.908 qpair failed and we were unable to recover it. 
00:27:28.908 [2024-12-11 15:08:21.789718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.908 [2024-12-11 15:08:21.789776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.908 [2024-12-11 15:08:21.789790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.908 [2024-12-11 15:08:21.789798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.908 [2024-12-11 15:08:21.789804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.908 [2024-12-11 15:08:21.789819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-12-11 15:08:21.799762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.908 [2024-12-11 15:08:21.799817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.908 [2024-12-11 15:08:21.799831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.908 [2024-12-11 15:08:21.799838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.908 [2024-12-11 15:08:21.799845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.908 [2024-12-11 15:08:21.799860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.908 qpair failed and we were unable to recover it. 00:27:28.908 [2024-12-11 15:08:21.809787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.908 [2024-12-11 15:08:21.809846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.908 [2024-12-11 15:08:21.809860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.908 [2024-12-11 15:08:21.809868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.908 [2024-12-11 15:08:21.809878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.909 [2024-12-11 15:08:21.809893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.909 qpair failed and we were unable to recover it. 
00:27:28.909 [2024-12-11 15:08:21.819825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.909 [2024-12-11 15:08:21.819897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.909 [2024-12-11 15:08:21.819912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.909 [2024-12-11 15:08:21.819919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.909 [2024-12-11 15:08:21.819925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.909 [2024-12-11 15:08:21.819941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.909 qpair failed and we were unable to recover it. 00:27:28.909 [2024-12-11 15:08:21.829837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.909 [2024-12-11 15:08:21.829901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.909 [2024-12-11 15:08:21.829916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.909 [2024-12-11 15:08:21.829924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.909 [2024-12-11 15:08:21.829930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.909 [2024-12-11 15:08:21.829945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.909 qpair failed and we were unable to recover it. 00:27:28.909 [2024-12-11 15:08:21.839892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.909 [2024-12-11 15:08:21.839986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.909 [2024-12-11 15:08:21.840000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.909 [2024-12-11 15:08:21.840007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.909 [2024-12-11 15:08:21.840013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.909 [2024-12-11 15:08:21.840028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.909 qpair failed and we were unable to recover it. 
00:27:28.909 [2024-12-11 15:08:21.849902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.909 [2024-12-11 15:08:21.849962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.909 [2024-12-11 15:08:21.849978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.909 [2024-12-11 15:08:21.849985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.909 [2024-12-11 15:08:21.849991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.909 [2024-12-11 15:08:21.850006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.909 qpair failed and we were unable to recover it. 00:27:28.909 [2024-12-11 15:08:21.859923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.909 [2024-12-11 15:08:21.859984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.909 [2024-12-11 15:08:21.859999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.909 [2024-12-11 15:08:21.860006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.909 [2024-12-11 15:08:21.860012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.909 [2024-12-11 15:08:21.860027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.909 qpair failed and we were unable to recover it. 00:27:28.909 [2024-12-11 15:08:21.869935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.909 [2024-12-11 15:08:21.869992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.909 [2024-12-11 15:08:21.870006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.909 [2024-12-11 15:08:21.870014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.909 [2024-12-11 15:08:21.870021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.909 [2024-12-11 15:08:21.870037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.909 qpair failed and we were unable to recover it. 
00:27:28.909 [2024-12-11 15:08:21.879998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.909 [2024-12-11 15:08:21.880070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.909 [2024-12-11 15:08:21.880085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.909 [2024-12-11 15:08:21.880092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.909 [2024-12-11 15:08:21.880098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.909 [2024-12-11 15:08:21.880114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.909 qpair failed and we were unable to recover it. 00:27:28.909 [2024-12-11 15:08:21.890009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.909 [2024-12-11 15:08:21.890068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.909 [2024-12-11 15:08:21.890083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.909 [2024-12-11 15:08:21.890090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.909 [2024-12-11 15:08:21.890097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.909 [2024-12-11 15:08:21.890112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.909 qpair failed and we were unable to recover it. 00:27:28.909 [2024-12-11 15:08:21.900039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.909 [2024-12-11 15:08:21.900114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.909 [2024-12-11 15:08:21.900132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.909 [2024-12-11 15:08:21.900140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.909 [2024-12-11 15:08:21.900146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.909 [2024-12-11 15:08:21.900165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.909 qpair failed and we were unable to recover it. 
00:27:28.909 [2024-12-11 15:08:21.910095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.909 [2024-12-11 15:08:21.910145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.909 [2024-12-11 15:08:21.910223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.909 [2024-12-11 15:08:21.910231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.909 [2024-12-11 15:08:21.910237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.909 [2024-12-11 15:08:21.910253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.909 qpair failed and we were unable to recover it. 00:27:28.909 [2024-12-11 15:08:21.920082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.909 [2024-12-11 15:08:21.920136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.909 [2024-12-11 15:08:21.920150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.909 [2024-12-11 15:08:21.920161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.909 [2024-12-11 15:08:21.920167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.909 [2024-12-11 15:08:21.920182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.909 qpair failed and we were unable to recover it. 00:27:28.909 [2024-12-11 15:08:21.930125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.909 [2024-12-11 15:08:21.930189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.909 [2024-12-11 15:08:21.930205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.909 [2024-12-11 15:08:21.930212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.909 [2024-12-11 15:08:21.930219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.909 [2024-12-11 15:08:21.930235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.909 qpair failed and we were unable to recover it. 
00:27:28.909 [2024-12-11 15:08:21.940122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.909 [2024-12-11 15:08:21.940186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.909 [2024-12-11 15:08:21.940200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.909 [2024-12-11 15:08:21.940208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.909 [2024-12-11 15:08:21.940218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.909 [2024-12-11 15:08:21.940233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.909 qpair failed and we were unable to recover it. 00:27:28.909 [2024-12-11 15:08:21.950180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.910 [2024-12-11 15:08:21.950246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.910 [2024-12-11 15:08:21.950265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.910 [2024-12-11 15:08:21.950273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.910 [2024-12-11 15:08:21.950279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:28.910 [2024-12-11 15:08:21.950301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.910 qpair failed and we were unable to recover it. 00:27:29.169 [2024-12-11 15:08:21.960275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.169 [2024-12-11 15:08:21.960382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.169 [2024-12-11 15:08:21.960401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.169 [2024-12-11 15:08:21.960409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.169 [2024-12-11 15:08:21.960416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.169 [2024-12-11 15:08:21.960433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.169 qpair failed and we were unable to recover it. 
00:27:29.169 [2024-12-11 15:08:21.970301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.169 [2024-12-11 15:08:21.970399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.169 [2024-12-11 15:08:21.970414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.169 [2024-12-11 15:08:21.970421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.169 [2024-12-11 15:08:21.970428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.169 [2024-12-11 15:08:21.970443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.169 qpair failed and we were unable to recover it. 00:27:29.169 [2024-12-11 15:08:21.980267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.169 [2024-12-11 15:08:21.980326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.169 [2024-12-11 15:08:21.980340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.169 [2024-12-11 15:08:21.980348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.169 [2024-12-11 15:08:21.980355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.169 [2024-12-11 15:08:21.980370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.169 qpair failed and we were unable to recover it. 00:27:29.169 [2024-12-11 15:08:21.990305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.169 [2024-12-11 15:08:21.990388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.169 [2024-12-11 15:08:21.990403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.169 [2024-12-11 15:08:21.990410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.169 [2024-12-11 15:08:21.990416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.169 [2024-12-11 15:08:21.990431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.169 qpair failed and we were unable to recover it. 
00:27:29.169 [2024-12-11 15:08:22.000303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.169 [2024-12-11 15:08:22.000364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.169 [2024-12-11 15:08:22.000379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.169 [2024-12-11 15:08:22.000386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.169 [2024-12-11 15:08:22.000393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.169 [2024-12-11 15:08:22.000407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.170 qpair failed and we were unable to recover it. 00:27:29.170 [2024-12-11 15:08:22.010355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.170 [2024-12-11 15:08:22.010414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.170 [2024-12-11 15:08:22.010429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.170 [2024-12-11 15:08:22.010437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.170 [2024-12-11 15:08:22.010444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.170 [2024-12-11 15:08:22.010460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.170 qpair failed and we were unable to recover it. 00:27:29.170 [2024-12-11 15:08:22.020374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.170 [2024-12-11 15:08:22.020457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.170 [2024-12-11 15:08:22.020470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.170 [2024-12-11 15:08:22.020478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.170 [2024-12-11 15:08:22.020484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.170 [2024-12-11 15:08:22.020499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.170 qpair failed and we were unable to recover it. 
00:27:29.170 [2024-12-11 15:08:22.030394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.170 [2024-12-11 15:08:22.030448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.170 [2024-12-11 15:08:22.030467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.170 [2024-12-11 15:08:22.030475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.170 [2024-12-11 15:08:22.030481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.170 [2024-12-11 15:08:22.030496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.170 qpair failed and we were unable to recover it. 00:27:29.170 [2024-12-11 15:08:22.040415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.170 [2024-12-11 15:08:22.040476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.170 [2024-12-11 15:08:22.040490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.170 [2024-12-11 15:08:22.040498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.170 [2024-12-11 15:08:22.040504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.170 [2024-12-11 15:08:22.040520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.170 qpair failed and we were unable to recover it. 00:27:29.170 [2024-12-11 15:08:22.050464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.170 [2024-12-11 15:08:22.050521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.170 [2024-12-11 15:08:22.050536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.170 [2024-12-11 15:08:22.050543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.170 [2024-12-11 15:08:22.050550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.170 [2024-12-11 15:08:22.050565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.170 qpair failed and we were unable to recover it. 
00:27:29.170 [2024-12-11 15:08:22.060451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.170 [2024-12-11 15:08:22.060508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.170 [2024-12-11 15:08:22.060523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.170 [2024-12-11 15:08:22.060530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.170 [2024-12-11 15:08:22.060536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.170 [2024-12-11 15:08:22.060550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.170 qpair failed and we were unable to recover it. 00:27:29.170 [2024-12-11 15:08:22.070443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.170 [2024-12-11 15:08:22.070499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.170 [2024-12-11 15:08:22.070514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.170 [2024-12-11 15:08:22.070521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.170 [2024-12-11 15:08:22.070531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.170 [2024-12-11 15:08:22.070546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.170 qpair failed and we were unable to recover it. 00:27:29.170 [2024-12-11 15:08:22.080537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.170 [2024-12-11 15:08:22.080598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.170 [2024-12-11 15:08:22.080613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.170 [2024-12-11 15:08:22.080620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.170 [2024-12-11 15:08:22.080627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.170 [2024-12-11 15:08:22.080642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.170 qpair failed and we were unable to recover it. 
00:27:29.170 [2024-12-11 15:08:22.090520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.170 [2024-12-11 15:08:22.090578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.170 [2024-12-11 15:08:22.090593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.170 [2024-12-11 15:08:22.090600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.170 [2024-12-11 15:08:22.090606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.170 [2024-12-11 15:08:22.090621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.170 qpair failed and we were unable to recover it. 00:27:29.170 [2024-12-11 15:08:22.100630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.170 [2024-12-11 15:08:22.100692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.170 [2024-12-11 15:08:22.100707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.170 [2024-12-11 15:08:22.100718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.170 [2024-12-11 15:08:22.100728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.170 [2024-12-11 15:08:22.100744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.170 qpair failed and we were unable to recover it. 00:27:29.170 [2024-12-11 15:08:22.110634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.170 [2024-12-11 15:08:22.110693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.170 [2024-12-11 15:08:22.110709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.170 [2024-12-11 15:08:22.110716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.170 [2024-12-11 15:08:22.110723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.170 [2024-12-11 15:08:22.110739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.170 qpair failed and we were unable to recover it. 
00:27:29.170 [2024-12-11 15:08:22.120665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.170 [2024-12-11 15:08:22.120723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.170 [2024-12-11 15:08:22.120738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.170 [2024-12-11 15:08:22.120746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.170 [2024-12-11 15:08:22.120752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.170 [2024-12-11 15:08:22.120768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.170 qpair failed and we were unable to recover it. 00:27:29.170 [2024-12-11 15:08:22.130696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.170 [2024-12-11 15:08:22.130753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.170 [2024-12-11 15:08:22.130768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.170 [2024-12-11 15:08:22.130775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.170 [2024-12-11 15:08:22.130782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.170 [2024-12-11 15:08:22.130797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.170 qpair failed and we were unable to recover it. 00:27:29.170 [2024-12-11 15:08:22.140726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.170 [2024-12-11 15:08:22.140803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.170 [2024-12-11 15:08:22.140818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.171 [2024-12-11 15:08:22.140825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.171 [2024-12-11 15:08:22.140832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.171 [2024-12-11 15:08:22.140848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.171 qpair failed and we were unable to recover it. 
00:27:29.171 [2024-12-11 15:08:22.150738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.171 [2024-12-11 15:08:22.150820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.171 [2024-12-11 15:08:22.150835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.171 [2024-12-11 15:08:22.150843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.171 [2024-12-11 15:08:22.150850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.171 [2024-12-11 15:08:22.150865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.171 qpair failed and we were unable to recover it. 00:27:29.171 [2024-12-11 15:08:22.160752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.171 [2024-12-11 15:08:22.160806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.171 [2024-12-11 15:08:22.160824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.171 [2024-12-11 15:08:22.160831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.171 [2024-12-11 15:08:22.160837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.171 [2024-12-11 15:08:22.160853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.171 qpair failed and we were unable to recover it. 00:27:29.171 [2024-12-11 15:08:22.170805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.171 [2024-12-11 15:08:22.170865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.171 [2024-12-11 15:08:22.170880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.171 [2024-12-11 15:08:22.170888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.171 [2024-12-11 15:08:22.170894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.171 [2024-12-11 15:08:22.170909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.171 qpair failed and we were unable to recover it. 
00:27:29.171 [2024-12-11 15:08:22.180781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.171 [2024-12-11 15:08:22.180836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.171 [2024-12-11 15:08:22.180851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.171 [2024-12-11 15:08:22.180859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.171 [2024-12-11 15:08:22.180866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.171 [2024-12-11 15:08:22.180881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.171 qpair failed and we were unable to recover it. 00:27:29.171 [2024-12-11 15:08:22.190978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.171 [2024-12-11 15:08:22.191087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.171 [2024-12-11 15:08:22.191101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.171 [2024-12-11 15:08:22.191109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.171 [2024-12-11 15:08:22.191116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.171 [2024-12-11 15:08:22.191131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.171 qpair failed and we were unable to recover it. 00:27:29.171 [2024-12-11 15:08:22.200941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.171 [2024-12-11 15:08:22.200999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.171 [2024-12-11 15:08:22.201013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.171 [2024-12-11 15:08:22.201020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.171 [2024-12-11 15:08:22.201030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.171 [2024-12-11 15:08:22.201044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.171 qpair failed and we were unable to recover it. 
00:27:29.171 [2024-12-11 15:08:22.210958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.171 [2024-12-11 15:08:22.211015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.171 [2024-12-11 15:08:22.211034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.171 [2024-12-11 15:08:22.211043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.171 [2024-12-11 15:08:22.211050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.171 [2024-12-11 15:08:22.211067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.171 qpair failed and we were unable to recover it. 00:27:29.430 [2024-12-11 15:08:22.221013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.431 [2024-12-11 15:08:22.221072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.431 [2024-12-11 15:08:22.221091] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.431 [2024-12-11 15:08:22.221100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.431 [2024-12-11 15:08:22.221107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.431 [2024-12-11 15:08:22.221125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-12-11 15:08:22.230993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.431 [2024-12-11 15:08:22.231084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.431 [2024-12-11 15:08:22.231100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.431 [2024-12-11 15:08:22.231108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.431 [2024-12-11 15:08:22.231114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.431 [2024-12-11 15:08:22.231131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.431 qpair failed and we were unable to recover it. 
00:27:29.431 [2024-12-11 15:08:22.241031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.431 [2024-12-11 15:08:22.241092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.431 [2024-12-11 15:08:22.241108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.431 [2024-12-11 15:08:22.241115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.431 [2024-12-11 15:08:22.241121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.431 [2024-12-11 15:08:22.241137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-12-11 15:08:22.251021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.431 [2024-12-11 15:08:22.251080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.431 [2024-12-11 15:08:22.251095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.431 [2024-12-11 15:08:22.251103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.431 [2024-12-11 15:08:22.251110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.431 [2024-12-11 15:08:22.251125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-12-11 15:08:22.261082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.431 [2024-12-11 15:08:22.261165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.431 [2024-12-11 15:08:22.261181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.431 [2024-12-11 15:08:22.261188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.431 [2024-12-11 15:08:22.261195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.431 [2024-12-11 15:08:22.261210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.431 qpair failed and we were unable to recover it. 
00:27:29.431 [2024-12-11 15:08:22.271062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.431 [2024-12-11 15:08:22.271118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.431 [2024-12-11 15:08:22.271133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.431 [2024-12-11 15:08:22.271140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.431 [2024-12-11 15:08:22.271147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.431 [2024-12-11 15:08:22.271166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-12-11 15:08:22.281126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.431 [2024-12-11 15:08:22.281196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.431 [2024-12-11 15:08:22.281211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.431 [2024-12-11 15:08:22.281218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.431 [2024-12-11 15:08:22.281225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.431 [2024-12-11 15:08:22.281241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-12-11 15:08:22.291122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.431 [2024-12-11 15:08:22.291202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.431 [2024-12-11 15:08:22.291221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.431 [2024-12-11 15:08:22.291228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.431 [2024-12-11 15:08:22.291235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.431 [2024-12-11 15:08:22.291250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.431 qpair failed and we were unable to recover it. 
00:27:29.431 [2024-12-11 15:08:22.301175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.431 [2024-12-11 15:08:22.301236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.431 [2024-12-11 15:08:22.301251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.431 [2024-12-11 15:08:22.301259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.431 [2024-12-11 15:08:22.301266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.431 [2024-12-11 15:08:22.301281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-12-11 15:08:22.311199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.431 [2024-12-11 15:08:22.311257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.431 [2024-12-11 15:08:22.311272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.431 [2024-12-11 15:08:22.311280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.431 [2024-12-11 15:08:22.311286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.431 [2024-12-11 15:08:22.311301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-12-11 15:08:22.321228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.431 [2024-12-11 15:08:22.321281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.431 [2024-12-11 15:08:22.321296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.431 [2024-12-11 15:08:22.321304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.431 [2024-12-11 15:08:22.321310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.431 [2024-12-11 15:08:22.321326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.431 qpair failed and we were unable to recover it. 
00:27:29.431 [2024-12-11 15:08:22.331287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.431 [2024-12-11 15:08:22.331347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.431 [2024-12-11 15:08:22.331364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.431 [2024-12-11 15:08:22.331372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.431 [2024-12-11 15:08:22.331385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.431 [2024-12-11 15:08:22.331403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-12-11 15:08:22.341235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.431 [2024-12-11 15:08:22.341296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.431 [2024-12-11 15:08:22.341311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.431 [2024-12-11 15:08:22.341319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.431 [2024-12-11 15:08:22.341326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.431 [2024-12-11 15:08:22.341342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.431 qpair failed and we were unable to recover it. 00:27:29.431 [2024-12-11 15:08:22.351248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.431 [2024-12-11 15:08:22.351300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.432 [2024-12-11 15:08:22.351315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.432 [2024-12-11 15:08:22.351322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.432 [2024-12-11 15:08:22.351329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.432 [2024-12-11 15:08:22.351345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.432 qpair failed and we were unable to recover it. 
00:27:29.432 [2024-12-11 15:08:22.361351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.432 [2024-12-11 15:08:22.361402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.432 [2024-12-11 15:08:22.361417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.432 [2024-12-11 15:08:22.361424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.432 [2024-12-11 15:08:22.361431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.432 [2024-12-11 15:08:22.361445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-12-11 15:08:22.371329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.432 [2024-12-11 15:08:22.371390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.432 [2024-12-11 15:08:22.371404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.432 [2024-12-11 15:08:22.371411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.432 [2024-12-11 15:08:22.371418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.432 [2024-12-11 15:08:22.371433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-12-11 15:08:22.381391] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.432 [2024-12-11 15:08:22.381450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.432 [2024-12-11 15:08:22.381464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.432 [2024-12-11 15:08:22.381473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.432 [2024-12-11 15:08:22.381479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.432 [2024-12-11 15:08:22.381494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.432 qpair failed and we were unable to recover it. 
00:27:29.432 [2024-12-11 15:08:22.391417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.432 [2024-12-11 15:08:22.391474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.432 [2024-12-11 15:08:22.391490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.432 [2024-12-11 15:08:22.391497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.432 [2024-12-11 15:08:22.391503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.432 [2024-12-11 15:08:22.391518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-12-11 15:08:22.401398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.432 [2024-12-11 15:08:22.401491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.432 [2024-12-11 15:08:22.401506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.432 [2024-12-11 15:08:22.401513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.432 [2024-12-11 15:08:22.401519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.432 [2024-12-11 15:08:22.401534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-12-11 15:08:22.411504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.432 [2024-12-11 15:08:22.411560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.432 [2024-12-11 15:08:22.411575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.432 [2024-12-11 15:08:22.411582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.432 [2024-12-11 15:08:22.411588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.432 [2024-12-11 15:08:22.411603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.432 qpair failed and we were unable to recover it. 
00:27:29.432 [2024-12-11 15:08:22.421508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.432 [2024-12-11 15:08:22.421560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.432 [2024-12-11 15:08:22.421578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.432 [2024-12-11 15:08:22.421585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.432 [2024-12-11 15:08:22.421591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.432 [2024-12-11 15:08:22.421606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-12-11 15:08:22.431529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.432 [2024-12-11 15:08:22.431585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.432 [2024-12-11 15:08:22.431600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.432 [2024-12-11 15:08:22.431607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.432 [2024-12-11 15:08:22.431613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.432 [2024-12-11 15:08:22.431629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-12-11 15:08:22.441560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.432 [2024-12-11 15:08:22.441621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.432 [2024-12-11 15:08:22.441635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.432 [2024-12-11 15:08:22.441643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.432 [2024-12-11 15:08:22.441649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.432 [2024-12-11 15:08:22.441664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.432 qpair failed and we were unable to recover it. 
00:27:29.432 [2024-12-11 15:08:22.451591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.432 [2024-12-11 15:08:22.451663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.432 [2024-12-11 15:08:22.451679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.432 [2024-12-11 15:08:22.451686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.432 [2024-12-11 15:08:22.451692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.432 [2024-12-11 15:08:22.451708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-12-11 15:08:22.461568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.432 [2024-12-11 15:08:22.461625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.432 [2024-12-11 15:08:22.461640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.432 [2024-12-11 15:08:22.461647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.432 [2024-12-11 15:08:22.461657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.432 [2024-12-11 15:08:22.461672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.432 qpair failed and we were unable to recover it. 00:27:29.432 [2024-12-11 15:08:22.471667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.432 [2024-12-11 15:08:22.471723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.432 [2024-12-11 15:08:22.471742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.432 [2024-12-11 15:08:22.471749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.432 [2024-12-11 15:08:22.471756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.432 [2024-12-11 15:08:22.471774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.432 qpair failed and we were unable to recover it. 
00:27:29.692 [2024-12-11 15:08:22.481699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.692 [2024-12-11 15:08:22.481757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.692 [2024-12-11 15:08:22.481776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.692 [2024-12-11 15:08:22.481785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.692 [2024-12-11 15:08:22.481791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.692 [2024-12-11 15:08:22.481809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-12-11 15:08:22.491716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.692 [2024-12-11 15:08:22.491776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.692 [2024-12-11 15:08:22.491791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.692 [2024-12-11 15:08:22.491798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.692 [2024-12-11 15:08:22.491804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.692 [2024-12-11 15:08:22.491820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-12-11 15:08:22.501752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.692 [2024-12-11 15:08:22.501806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.692 [2024-12-11 15:08:22.501821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.692 [2024-12-11 15:08:22.501828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.692 [2024-12-11 15:08:22.501835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.692 [2024-12-11 15:08:22.501850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.692 qpair failed and we were unable to recover it. 
00:27:29.692 [2024-12-11 15:08:22.511751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.692 [2024-12-11 15:08:22.511838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.692 [2024-12-11 15:08:22.511853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.692 [2024-12-11 15:08:22.511860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.692 [2024-12-11 15:08:22.511866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.692 [2024-12-11 15:08:22.511881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-12-11 15:08:22.521804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.692 [2024-12-11 15:08:22.521861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.692 [2024-12-11 15:08:22.521877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.692 [2024-12-11 15:08:22.521884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.692 [2024-12-11 15:08:22.521890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.692 [2024-12-11 15:08:22.521905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.692 qpair failed and we were unable to recover it. 00:27:29.692 [2024-12-11 15:08:22.531868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.692 [2024-12-11 15:08:22.531964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.692 [2024-12-11 15:08:22.531980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.692 [2024-12-11 15:08:22.531987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.692 [2024-12-11 15:08:22.531994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.693 [2024-12-11 15:08:22.532009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.693 qpair failed and we were unable to recover it. 
00:27:29.693 [2024-12-11 15:08:22.541862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.693 [2024-12-11 15:08:22.541916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.693 [2024-12-11 15:08:22.541931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.693 [2024-12-11 15:08:22.541939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.693 [2024-12-11 15:08:22.541945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.693 [2024-12-11 15:08:22.541961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-12-11 15:08:22.551882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.693 [2024-12-11 15:08:22.551961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.693 [2024-12-11 15:08:22.551979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.693 [2024-12-11 15:08:22.551987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.693 [2024-12-11 15:08:22.551993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.693 [2024-12-11 15:08:22.552007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-12-11 15:08:22.561913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.693 [2024-12-11 15:08:22.561966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.693 [2024-12-11 15:08:22.561980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.693 [2024-12-11 15:08:22.561988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.693 [2024-12-11 15:08:22.561994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.693 [2024-12-11 15:08:22.562009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.693 qpair failed and we were unable to recover it. 
00:27:29.693 [2024-12-11 15:08:22.571889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.693 [2024-12-11 15:08:22.571945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.693 [2024-12-11 15:08:22.571959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.693 [2024-12-11 15:08:22.571966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.693 [2024-12-11 15:08:22.571972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.693 [2024-12-11 15:08:22.571988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-12-11 15:08:22.581977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.693 [2024-12-11 15:08:22.582028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.693 [2024-12-11 15:08:22.582042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.693 [2024-12-11 15:08:22.582049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.693 [2024-12-11 15:08:22.582055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.693 [2024-12-11 15:08:22.582070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-12-11 15:08:22.592023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.693 [2024-12-11 15:08:22.592093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.693 [2024-12-11 15:08:22.592110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.693 [2024-12-11 15:08:22.592117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.693 [2024-12-11 15:08:22.592127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.693 [2024-12-11 15:08:22.592143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.693 qpair failed and we were unable to recover it. 
00:27:29.693 [2024-12-11 15:08:22.602077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.693 [2024-12-11 15:08:22.602141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.693 [2024-12-11 15:08:22.602161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.693 [2024-12-11 15:08:22.602169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.693 [2024-12-11 15:08:22.602175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.693 [2024-12-11 15:08:22.602190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-12-11 15:08:22.612105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.693 [2024-12-11 15:08:22.612180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.693 [2024-12-11 15:08:22.612194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.693 [2024-12-11 15:08:22.612201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.693 [2024-12-11 15:08:22.612208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.693 [2024-12-11 15:08:22.612223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-12-11 15:08:22.622089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.693 [2024-12-11 15:08:22.622170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.693 [2024-12-11 15:08:22.622185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.693 [2024-12-11 15:08:22.622192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.693 [2024-12-11 15:08:22.622198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.693 [2024-12-11 15:08:22.622212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.693 qpair failed and we were unable to recover it. 
00:27:29.693 [2024-12-11 15:08:22.632173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.693 [2024-12-11 15:08:22.632242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.693 [2024-12-11 15:08:22.632257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.693 [2024-12-11 15:08:22.632264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.693 [2024-12-11 15:08:22.632271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.693 [2024-12-11 15:08:22.632287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-12-11 15:08:22.642184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.693 [2024-12-11 15:08:22.642291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.693 [2024-12-11 15:08:22.642306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.693 [2024-12-11 15:08:22.642314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.693 [2024-12-11 15:08:22.642320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.693 [2024-12-11 15:08:22.642335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-12-11 15:08:22.652194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.693 [2024-12-11 15:08:22.652266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.693 [2024-12-11 15:08:22.652281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.693 [2024-12-11 15:08:22.652288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.693 [2024-12-11 15:08:22.652295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.693 [2024-12-11 15:08:22.652310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.693 qpair failed and we were unable to recover it. 
00:27:29.693 [2024-12-11 15:08:22.662196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.693 [2024-12-11 15:08:22.662255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.693 [2024-12-11 15:08:22.662269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.693 [2024-12-11 15:08:22.662276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.693 [2024-12-11 15:08:22.662283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.693 [2024-12-11 15:08:22.662298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.693 qpair failed and we were unable to recover it. 00:27:29.693 [2024-12-11 15:08:22.672222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.693 [2024-12-11 15:08:22.672277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.694 [2024-12-11 15:08:22.672292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.694 [2024-12-11 15:08:22.672299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.694 [2024-12-11 15:08:22.672305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.694 [2024-12-11 15:08:22.672321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.694 qpair failed and we were unable to recover it. 00:27:29.694 [2024-12-11 15:08:22.682250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.694 [2024-12-11 15:08:22.682308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.694 [2024-12-11 15:08:22.682326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.694 [2024-12-11 15:08:22.682334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.694 [2024-12-11 15:08:22.682340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.694 [2024-12-11 15:08:22.682355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.694 qpair failed and we were unable to recover it. 
00:27:29.694 [2024-12-11 15:08:22.692286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.694 [2024-12-11 15:08:22.692347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.694 [2024-12-11 15:08:22.692361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.694 [2024-12-11 15:08:22.692368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.694 [2024-12-11 15:08:22.692375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.694 [2024-12-11 15:08:22.692389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.694 qpair failed and we were unable to recover it. 00:27:29.694 [2024-12-11 15:08:22.702319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.694 [2024-12-11 15:08:22.702377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.694 [2024-12-11 15:08:22.702391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.694 [2024-12-11 15:08:22.702398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.694 [2024-12-11 15:08:22.702405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.694 [2024-12-11 15:08:22.702420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.694 qpair failed and we were unable to recover it. 00:27:29.694 [2024-12-11 15:08:22.712410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.694 [2024-12-11 15:08:22.712478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.694 [2024-12-11 15:08:22.712493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.694 [2024-12-11 15:08:22.712500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.694 [2024-12-11 15:08:22.712507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.694 [2024-12-11 15:08:22.712522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.694 qpair failed and we were unable to recover it. 
00:27:29.694 [2024-12-11 15:08:22.722368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.694 [2024-12-11 15:08:22.722425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.694 [2024-12-11 15:08:22.722440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.694 [2024-12-11 15:08:22.722447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.694 [2024-12-11 15:08:22.722456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.694 [2024-12-11 15:08:22.722473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.694 qpair failed and we were unable to recover it. 00:27:29.694 [2024-12-11 15:08:22.732412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.694 [2024-12-11 15:08:22.732481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.694 [2024-12-11 15:08:22.732497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.694 [2024-12-11 15:08:22.732504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.694 [2024-12-11 15:08:22.732511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.694 [2024-12-11 15:08:22.732526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.694 qpair failed and we were unable to recover it. 00:27:29.953 [2024-12-11 15:08:22.742479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.953 [2024-12-11 15:08:22.742589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.953 [2024-12-11 15:08:22.742609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.953 [2024-12-11 15:08:22.742618] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.953 [2024-12-11 15:08:22.742625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.953 [2024-12-11 15:08:22.742643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.953 qpair failed and we were unable to recover it. 
00:27:29.953 [2024-12-11 15:08:22.752458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.953 [2024-12-11 15:08:22.752517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.953 [2024-12-11 15:08:22.752534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.953 [2024-12-11 15:08:22.752542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.953 [2024-12-11 15:08:22.752549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.953 [2024-12-11 15:08:22.752565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.953 qpair failed and we were unable to recover it. 00:27:29.953 [2024-12-11 15:08:22.762471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.953 [2024-12-11 15:08:22.762526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.953 [2024-12-11 15:08:22.762541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.953 [2024-12-11 15:08:22.762549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.953 [2024-12-11 15:08:22.762556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.953 [2024-12-11 15:08:22.762572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.953 qpair failed and we were unable to recover it. 00:27:29.953 [2024-12-11 15:08:22.772516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.953 [2024-12-11 15:08:22.772612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.953 [2024-12-11 15:08:22.772627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.953 [2024-12-11 15:08:22.772635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.953 [2024-12-11 15:08:22.772641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.953 [2024-12-11 15:08:22.772656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.953 qpair failed and we were unable to recover it. 
00:27:29.953 [2024-12-11 15:08:22.782526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.953 [2024-12-11 15:08:22.782634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.953 [2024-12-11 15:08:22.782649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.953 [2024-12-11 15:08:22.782656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.953 [2024-12-11 15:08:22.782662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.953 [2024-12-11 15:08:22.782678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.953 qpair failed and we were unable to recover it. 00:27:29.953 [2024-12-11 15:08:22.792630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.953 [2024-12-11 15:08:22.792687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.953 [2024-12-11 15:08:22.792701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.953 [2024-12-11 15:08:22.792708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.953 [2024-12-11 15:08:22.792715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.953 [2024-12-11 15:08:22.792729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.953 qpair failed and we were unable to recover it. 00:27:29.953 [2024-12-11 15:08:22.802641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.953 [2024-12-11 15:08:22.802740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.953 [2024-12-11 15:08:22.802754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.953 [2024-12-11 15:08:22.802761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.953 [2024-12-11 15:08:22.802767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.953 [2024-12-11 15:08:22.802782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.953 qpair failed and we were unable to recover it. 
00:27:29.953 [2024-12-11 15:08:22.812672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.953 [2024-12-11 15:08:22.812729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.953 [2024-12-11 15:08:22.812747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.953 [2024-12-11 15:08:22.812755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.953 [2024-12-11 15:08:22.812761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.953 [2024-12-11 15:08:22.812776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.953 qpair failed and we were unable to recover it. 00:27:29.953 [2024-12-11 15:08:22.822652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.953 [2024-12-11 15:08:22.822721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.953 [2024-12-11 15:08:22.822736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.953 [2024-12-11 15:08:22.822743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.953 [2024-12-11 15:08:22.822750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.953 [2024-12-11 15:08:22.822765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.953 qpair failed and we were unable to recover it. 00:27:29.953 [2024-12-11 15:08:22.832715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.953 [2024-12-11 15:08:22.832771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.953 [2024-12-11 15:08:22.832786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.953 [2024-12-11 15:08:22.832794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.953 [2024-12-11 15:08:22.832800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.953 [2024-12-11 15:08:22.832816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.953 qpair failed and we were unable to recover it. 
00:27:29.953 [2024-12-11 15:08:22.842765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.953 [2024-12-11 15:08:22.842818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.953 [2024-12-11 15:08:22.842833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.953 [2024-12-11 15:08:22.842840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.953 [2024-12-11 15:08:22.842847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.953 [2024-12-11 15:08:22.842862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.953 qpair failed and we were unable to recover it. 00:27:29.953 [2024-12-11 15:08:22.852734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.954 [2024-12-11 15:08:22.852795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.954 [2024-12-11 15:08:22.852810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.954 [2024-12-11 15:08:22.852817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.954 [2024-12-11 15:08:22.852827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.954 [2024-12-11 15:08:22.852842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.954 qpair failed and we were unable to recover it. 00:27:29.954 [2024-12-11 15:08:22.862796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.954 [2024-12-11 15:08:22.862903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.954 [2024-12-11 15:08:22.862918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.954 [2024-12-11 15:08:22.862925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.954 [2024-12-11 15:08:22.862932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.954 [2024-12-11 15:08:22.862947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.954 qpair failed and we were unable to recover it. 
00:27:29.954 [2024-12-11 15:08:22.872825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.954 [2024-12-11 15:08:22.872906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.954 [2024-12-11 15:08:22.872921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.954 [2024-12-11 15:08:22.872928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.954 [2024-12-11 15:08:22.872935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.954 [2024-12-11 15:08:22.872950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.954 qpair failed and we were unable to recover it. 00:27:29.954 [2024-12-11 15:08:22.882860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.954 [2024-12-11 15:08:22.882912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.954 [2024-12-11 15:08:22.882926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.954 [2024-12-11 15:08:22.882934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.954 [2024-12-11 15:08:22.882941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.954 [2024-12-11 15:08:22.882956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.954 qpair failed and we were unable to recover it. 00:27:29.954 [2024-12-11 15:08:22.892852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.954 [2024-12-11 15:08:22.892911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.954 [2024-12-11 15:08:22.892925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.954 [2024-12-11 15:08:22.892933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.954 [2024-12-11 15:08:22.892939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.954 [2024-12-11 15:08:22.892954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.954 qpair failed and we were unable to recover it. 
00:27:29.954 [2024-12-11 15:08:22.902902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.954 [2024-12-11 15:08:22.903010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.954 [2024-12-11 15:08:22.903025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.954 [2024-12-11 15:08:22.903032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.954 [2024-12-11 15:08:22.903038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.954 [2024-12-11 15:08:22.903053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.954 qpair failed and we were unable to recover it. 00:27:29.954 [2024-12-11 15:08:22.912952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.954 [2024-12-11 15:08:22.913008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.954 [2024-12-11 15:08:22.913022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.954 [2024-12-11 15:08:22.913031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.954 [2024-12-11 15:08:22.913037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.954 [2024-12-11 15:08:22.913052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.954 qpair failed and we were unable to recover it. 00:27:29.954 [2024-12-11 15:08:22.922920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.954 [2024-12-11 15:08:22.922973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.954 [2024-12-11 15:08:22.922986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.954 [2024-12-11 15:08:22.922994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.954 [2024-12-11 15:08:22.923000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.954 [2024-12-11 15:08:22.923015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.954 qpair failed and we were unable to recover it. 
00:27:29.954 [2024-12-11 15:08:22.932947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.954 [2024-12-11 15:08:22.933006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.954 [2024-12-11 15:08:22.933023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.954 [2024-12-11 15:08:22.933031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.954 [2024-12-11 15:08:22.933037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.954 [2024-12-11 15:08:22.933053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.954 qpair failed and we were unable to recover it. 00:27:29.954 [2024-12-11 15:08:22.942985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.954 [2024-12-11 15:08:22.943052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.954 [2024-12-11 15:08:22.943071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.954 [2024-12-11 15:08:22.943079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.954 [2024-12-11 15:08:22.943085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.954 [2024-12-11 15:08:22.943101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.954 qpair failed and we were unable to recover it. 00:27:29.954 [2024-12-11 15:08:22.953006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.954 [2024-12-11 15:08:22.953062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.954 [2024-12-11 15:08:22.953077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.954 [2024-12-11 15:08:22.953084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.954 [2024-12-11 15:08:22.953091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.954 [2024-12-11 15:08:22.953106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.954 qpair failed and we were unable to recover it. 
00:27:29.954 [2024-12-11 15:08:22.963030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.954 [2024-12-11 15:08:22.963108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.954 [2024-12-11 15:08:22.963123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.954 [2024-12-11 15:08:22.963131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.954 [2024-12-11 15:08:22.963137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.954 [2024-12-11 15:08:22.963152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.954 qpair failed and we were unable to recover it. 00:27:29.954 [2024-12-11 15:08:22.973068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.954 [2024-12-11 15:08:22.973139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.954 [2024-12-11 15:08:22.973154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.954 [2024-12-11 15:08:22.973167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.954 [2024-12-11 15:08:22.973174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.954 [2024-12-11 15:08:22.973189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.954 qpair failed and we were unable to recover it. 00:27:29.954 [2024-12-11 15:08:22.983151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.954 [2024-12-11 15:08:22.983222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.954 [2024-12-11 15:08:22.983237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.955 [2024-12-11 15:08:22.983244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.955 [2024-12-11 15:08:22.983256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.955 [2024-12-11 15:08:22.983273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.955 qpair failed and we were unable to recover it. 
00:27:29.955 [2024-12-11 15:08:22.993069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.955 [2024-12-11 15:08:22.993169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.955 [2024-12-11 15:08:22.993184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.955 [2024-12-11 15:08:22.993191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.955 [2024-12-11 15:08:22.993198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:29.955 [2024-12-11 15:08:22.993212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.955 qpair failed and we were unable to recover it. 00:27:30.214 [2024-12-11 15:08:23.003079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.214 [2024-12-11 15:08:23.003145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.214 [2024-12-11 15:08:23.003172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.214 [2024-12-11 15:08:23.003181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.214 [2024-12-11 15:08:23.003187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.214 [2024-12-11 15:08:23.003205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.214 qpair failed and we were unable to recover it. 00:27:30.214 [2024-12-11 15:08:23.013176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.214 [2024-12-11 15:08:23.013246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.214 [2024-12-11 15:08:23.013262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.214 [2024-12-11 15:08:23.013270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.214 [2024-12-11 15:08:23.013278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.214 [2024-12-11 15:08:23.013295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.214 qpair failed and we were unable to recover it. 
00:27:30.214 [2024-12-11 15:08:23.023235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.214 [2024-12-11 15:08:23.023299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.214 [2024-12-11 15:08:23.023314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.214 [2024-12-11 15:08:23.023322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.214 [2024-12-11 15:08:23.023328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.214 [2024-12-11 15:08:23.023343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.214 qpair failed and we were unable to recover it. 00:27:30.214 [2024-12-11 15:08:23.033227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.214 [2024-12-11 15:08:23.033285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.214 [2024-12-11 15:08:23.033301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.214 [2024-12-11 15:08:23.033309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.214 [2024-12-11 15:08:23.033315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.214 [2024-12-11 15:08:23.033330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.214 qpair failed and we were unable to recover it. 00:27:30.214 [2024-12-11 15:08:23.043183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.214 [2024-12-11 15:08:23.043234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.214 [2024-12-11 15:08:23.043249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.214 [2024-12-11 15:08:23.043257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.214 [2024-12-11 15:08:23.043264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.214 [2024-12-11 15:08:23.043279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.214 qpair failed and we were unable to recover it. 
00:27:30.214 [2024-12-11 15:08:23.053298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.214 [2024-12-11 15:08:23.053403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.214 [2024-12-11 15:08:23.053419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.214 [2024-12-11 15:08:23.053426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.214 [2024-12-11 15:08:23.053433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.214 [2024-12-11 15:08:23.053448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.214 qpair failed and we were unable to recover it. 00:27:30.214 [2024-12-11 15:08:23.063339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.214 [2024-12-11 15:08:23.063397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.214 [2024-12-11 15:08:23.063413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.214 [2024-12-11 15:08:23.063420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.214 [2024-12-11 15:08:23.063426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.214 [2024-12-11 15:08:23.063441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.214 qpair failed and we were unable to recover it. 00:27:30.215 [2024-12-11 15:08:23.073342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.215 [2024-12-11 15:08:23.073406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.215 [2024-12-11 15:08:23.073425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.215 [2024-12-11 15:08:23.073433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.215 [2024-12-11 15:08:23.073440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.215 [2024-12-11 15:08:23.073455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.215 qpair failed and we were unable to recover it. 
00:27:30.215 [2024-12-11 15:08:23.083394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.215 [2024-12-11 15:08:23.083451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.215 [2024-12-11 15:08:23.083465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.215 [2024-12-11 15:08:23.083473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.215 [2024-12-11 15:08:23.083479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.215 [2024-12-11 15:08:23.083494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.215 qpair failed and we were unable to recover it. 00:27:30.215 [2024-12-11 15:08:23.093403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.215 [2024-12-11 15:08:23.093463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.215 [2024-12-11 15:08:23.093478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.215 [2024-12-11 15:08:23.093485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.215 [2024-12-11 15:08:23.093492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.215 [2024-12-11 15:08:23.093506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.215 qpair failed and we were unable to recover it. 00:27:30.215 [2024-12-11 15:08:23.103424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.215 [2024-12-11 15:08:23.103479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.215 [2024-12-11 15:08:23.103493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.215 [2024-12-11 15:08:23.103501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.215 [2024-12-11 15:08:23.103507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.215 [2024-12-11 15:08:23.103522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.215 qpair failed and we were unable to recover it. 
00:27:30.215 [2024-12-11 15:08:23.113469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.215 [2024-12-11 15:08:23.113571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.215 [2024-12-11 15:08:23.113586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.215 [2024-12-11 15:08:23.113593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.215 [2024-12-11 15:08:23.113602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.215 [2024-12-11 15:08:23.113617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.215 qpair failed and we were unable to recover it. 00:27:30.215 [2024-12-11 15:08:23.123514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.215 [2024-12-11 15:08:23.123574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.215 [2024-12-11 15:08:23.123589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.215 [2024-12-11 15:08:23.123596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.215 [2024-12-11 15:08:23.123603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.215 [2024-12-11 15:08:23.123618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.215 qpair failed and we were unable to recover it. 00:27:30.215 [2024-12-11 15:08:23.133565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.215 [2024-12-11 15:08:23.133620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.215 [2024-12-11 15:08:23.133635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.215 [2024-12-11 15:08:23.133643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.215 [2024-12-11 15:08:23.133649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.215 [2024-12-11 15:08:23.133664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.215 qpair failed and we were unable to recover it. 
00:27:30.215 [2024-12-11 15:08:23.143557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.215 [2024-12-11 15:08:23.143623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.215 [2024-12-11 15:08:23.143638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.215 [2024-12-11 15:08:23.143646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.215 [2024-12-11 15:08:23.143652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.215 [2024-12-11 15:08:23.143668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.215 qpair failed and we were unable to recover it. 00:27:30.215 [2024-12-11 15:08:23.153598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.215 [2024-12-11 15:08:23.153665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.215 [2024-12-11 15:08:23.153679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.215 [2024-12-11 15:08:23.153687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.215 [2024-12-11 15:08:23.153693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.215 [2024-12-11 15:08:23.153708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.215 qpair failed and we were unable to recover it. 00:27:30.215 [2024-12-11 15:08:23.163653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.215 [2024-12-11 15:08:23.163710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.215 [2024-12-11 15:08:23.163724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.215 [2024-12-11 15:08:23.163732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.215 [2024-12-11 15:08:23.163739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.215 [2024-12-11 15:08:23.163753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.215 qpair failed and we were unable to recover it. 
00:27:30.215 [2024-12-11 15:08:23.173634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.215 [2024-12-11 15:08:23.173699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.215 [2024-12-11 15:08:23.173714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.215 [2024-12-11 15:08:23.173722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.215 [2024-12-11 15:08:23.173728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.215 [2024-12-11 15:08:23.173743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.215 qpair failed and we were unable to recover it. 00:27:30.215 [2024-12-11 15:08:23.183680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.215 [2024-12-11 15:08:23.183735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.215 [2024-12-11 15:08:23.183749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.215 [2024-12-11 15:08:23.183756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.215 [2024-12-11 15:08:23.183762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.215 [2024-12-11 15:08:23.183777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.215 qpair failed and we were unable to recover it. 00:27:30.215 [2024-12-11 15:08:23.193667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.215 [2024-12-11 15:08:23.193725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.215 [2024-12-11 15:08:23.193739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.215 [2024-12-11 15:08:23.193747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.215 [2024-12-11 15:08:23.193753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.215 [2024-12-11 15:08:23.193767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.215 qpair failed and we were unable to recover it. 
00:27:30.215 [2024-12-11 15:08:23.203698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.215 [2024-12-11 15:08:23.203758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.216 [2024-12-11 15:08:23.203775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.216 [2024-12-11 15:08:23.203783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.216 [2024-12-11 15:08:23.203789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.216 [2024-12-11 15:08:23.203803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.216 qpair failed and we were unable to recover it. 00:27:30.216 [2024-12-11 15:08:23.213748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.216 [2024-12-11 15:08:23.213809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.216 [2024-12-11 15:08:23.213823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.216 [2024-12-11 15:08:23.213830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.216 [2024-12-11 15:08:23.213837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.216 [2024-12-11 15:08:23.213852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.216 qpair failed and we were unable to recover it. 00:27:30.216 [2024-12-11 15:08:23.223771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.216 [2024-12-11 15:08:23.223830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.216 [2024-12-11 15:08:23.223845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.216 [2024-12-11 15:08:23.223852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.216 [2024-12-11 15:08:23.223858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.216 [2024-12-11 15:08:23.223873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.216 qpair failed and we were unable to recover it. 
00:27:30.216 [2024-12-11 15:08:23.233791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.216 [2024-12-11 15:08:23.233843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.216 [2024-12-11 15:08:23.233859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.216 [2024-12-11 15:08:23.233866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.216 [2024-12-11 15:08:23.233873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.216 [2024-12-11 15:08:23.233887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.216 qpair failed and we were unable to recover it. 00:27:30.216 [2024-12-11 15:08:23.243829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.216 [2024-12-11 15:08:23.243885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.216 [2024-12-11 15:08:23.243899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.216 [2024-12-11 15:08:23.243910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.216 [2024-12-11 15:08:23.243917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.216 [2024-12-11 15:08:23.243933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.216 qpair failed and we were unable to recover it. 00:27:30.216 [2024-12-11 15:08:23.253841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.216 [2024-12-11 15:08:23.253937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.216 [2024-12-11 15:08:23.253953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.216 [2024-12-11 15:08:23.253960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.216 [2024-12-11 15:08:23.253966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.216 [2024-12-11 15:08:23.253982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.216 qpair failed and we were unable to recover it. 
00:27:30.476 [2024-12-11 15:08:23.263937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.476 [2024-12-11 15:08:23.264007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.476 [2024-12-11 15:08:23.264027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.476 [2024-12-11 15:08:23.264035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.476 [2024-12-11 15:08:23.264042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.476 [2024-12-11 15:08:23.264059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.476 qpair failed and we were unable to recover it. 00:27:30.476 [2024-12-11 15:08:23.273956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.476 [2024-12-11 15:08:23.274018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.476 [2024-12-11 15:08:23.274037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.476 [2024-12-11 15:08:23.274045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.476 [2024-12-11 15:08:23.274052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.476 [2024-12-11 15:08:23.274068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.476 qpair failed and we were unable to recover it. 00:27:30.476 [2024-12-11 15:08:23.284006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.476 [2024-12-11 15:08:23.284104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.476 [2024-12-11 15:08:23.284120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.476 [2024-12-11 15:08:23.284127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.476 [2024-12-11 15:08:23.284133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.476 [2024-12-11 15:08:23.284149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.476 qpair failed and we were unable to recover it. 
00:27:30.476 [2024-12-11 15:08:23.293986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.476 [2024-12-11 15:08:23.294082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.476 [2024-12-11 15:08:23.294097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.476 [2024-12-11 15:08:23.294105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.476 [2024-12-11 15:08:23.294111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.476 [2024-12-11 15:08:23.294126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.476 qpair failed and we were unable to recover it. 00:27:30.476 [2024-12-11 15:08:23.303969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.476 [2024-12-11 15:08:23.304026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.476 [2024-12-11 15:08:23.304042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.476 [2024-12-11 15:08:23.304049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.476 [2024-12-11 15:08:23.304055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.476 [2024-12-11 15:08:23.304070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.476 qpair failed and we were unable to recover it. 00:27:30.476 [2024-12-11 15:08:23.313991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.476 [2024-12-11 15:08:23.314046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.476 [2024-12-11 15:08:23.314062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.476 [2024-12-11 15:08:23.314069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.476 [2024-12-11 15:08:23.314076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.476 [2024-12-11 15:08:23.314091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.476 qpair failed and we were unable to recover it. 
00:27:30.476 [2024-12-11 15:08:23.324046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.476 [2024-12-11 15:08:23.324099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.476 [2024-12-11 15:08:23.324115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.476 [2024-12-11 15:08:23.324122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.476 [2024-12-11 15:08:23.324129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.476 [2024-12-11 15:08:23.324144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.476 qpair failed and we were unable to recover it. 00:27:30.476 [2024-12-11 15:08:23.334086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.476 [2024-12-11 15:08:23.334148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.476 [2024-12-11 15:08:23.334173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.476 [2024-12-11 15:08:23.334181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.476 [2024-12-11 15:08:23.334187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.476 [2024-12-11 15:08:23.334203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.476 qpair failed and we were unable to recover it. 00:27:30.476 [2024-12-11 15:08:23.344129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.476 [2024-12-11 15:08:23.344194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.476 [2024-12-11 15:08:23.344210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.476 [2024-12-11 15:08:23.344217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.476 [2024-12-11 15:08:23.344224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.476 [2024-12-11 15:08:23.344239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.476 qpair failed and we were unable to recover it. 
00:27:30.476 [2024-12-11 15:08:23.354142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.476 [2024-12-11 15:08:23.354201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.476 [2024-12-11 15:08:23.354217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.476 [2024-12-11 15:08:23.354225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.476 [2024-12-11 15:08:23.354232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.476 [2024-12-11 15:08:23.354247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.476 qpair failed and we were unable to recover it. 00:27:30.476 [2024-12-11 15:08:23.364102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.476 [2024-12-11 15:08:23.364193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.476 [2024-12-11 15:08:23.364209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.476 [2024-12-11 15:08:23.364216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.476 [2024-12-11 15:08:23.364222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.476 [2024-12-11 15:08:23.364237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.476 qpair failed and we were unable to recover it. 00:27:30.476 [2024-12-11 15:08:23.374189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.476 [2024-12-11 15:08:23.374249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.476 [2024-12-11 15:08:23.374263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.476 [2024-12-11 15:08:23.374274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.476 [2024-12-11 15:08:23.374281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.477 [2024-12-11 15:08:23.374295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.477 qpair failed and we were unable to recover it. 
00:27:30.477 [2024-12-11 15:08:23.384238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.477 [2024-12-11 15:08:23.384293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.477 [2024-12-11 15:08:23.384308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.477 [2024-12-11 15:08:23.384315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.477 [2024-12-11 15:08:23.384322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.477 [2024-12-11 15:08:23.384338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.477 qpair failed and we were unable to recover it. 00:27:30.477 [2024-12-11 15:08:23.394229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.477 [2024-12-11 15:08:23.394282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.477 [2024-12-11 15:08:23.394296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.477 [2024-12-11 15:08:23.394303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.477 [2024-12-11 15:08:23.394310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.477 [2024-12-11 15:08:23.394325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.477 qpair failed and we were unable to recover it. 00:27:30.477 [2024-12-11 15:08:23.404206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.477 [2024-12-11 15:08:23.404266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.477 [2024-12-11 15:08:23.404281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.477 [2024-12-11 15:08:23.404289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.477 [2024-12-11 15:08:23.404295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.477 [2024-12-11 15:08:23.404310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.477 qpair failed and we were unable to recover it. 
00:27:30.477 [2024-12-11 15:08:23.414333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.477 [2024-12-11 15:08:23.414390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.477 [2024-12-11 15:08:23.414405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.477 [2024-12-11 15:08:23.414412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.477 [2024-12-11 15:08:23.414419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.477 [2024-12-11 15:08:23.414434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.477 qpair failed and we were unable to recover it. 00:27:30.477 [2024-12-11 15:08:23.424347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.477 [2024-12-11 15:08:23.424398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.477 [2024-12-11 15:08:23.424413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.477 [2024-12-11 15:08:23.424421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.477 [2024-12-11 15:08:23.424428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.477 [2024-12-11 15:08:23.424443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.477 qpair failed and we were unable to recover it. 00:27:30.477 [2024-12-11 15:08:23.434356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.477 [2024-12-11 15:08:23.434411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.477 [2024-12-11 15:08:23.434426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.477 [2024-12-11 15:08:23.434434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.477 [2024-12-11 15:08:23.434440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.477 [2024-12-11 15:08:23.434456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.477 qpair failed and we were unable to recover it. 
00:27:30.477 [2024-12-11 15:08:23.444386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.477 [2024-12-11 15:08:23.444443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.477 [2024-12-11 15:08:23.444458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.477 [2024-12-11 15:08:23.444465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.477 [2024-12-11 15:08:23.444472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.477 [2024-12-11 15:08:23.444487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.477 qpair failed and we were unable to recover it. 00:27:30.477 [2024-12-11 15:08:23.454437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.477 [2024-12-11 15:08:23.454496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.477 [2024-12-11 15:08:23.454510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.477 [2024-12-11 15:08:23.454517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.477 [2024-12-11 15:08:23.454524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.477 [2024-12-11 15:08:23.454538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.477 qpair failed and we were unable to recover it. 00:27:30.477 [2024-12-11 15:08:23.464459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.477 [2024-12-11 15:08:23.464517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.477 [2024-12-11 15:08:23.464535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.477 [2024-12-11 15:08:23.464542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.477 [2024-12-11 15:08:23.464549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.477 [2024-12-11 15:08:23.464563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.477 qpair failed and we were unable to recover it. 
00:27:30.477 [2024-12-11 15:08:23.474426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.477 [2024-12-11 15:08:23.474498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.477 [2024-12-11 15:08:23.474512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.477 [2024-12-11 15:08:23.474519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.477 [2024-12-11 15:08:23.474526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.477 [2024-12-11 15:08:23.474541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.477 qpair failed and we were unable to recover it. 00:27:30.477 [2024-12-11 15:08:23.484499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.477 [2024-12-11 15:08:23.484554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.477 [2024-12-11 15:08:23.484567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.477 [2024-12-11 15:08:23.484575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.477 [2024-12-11 15:08:23.484581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.477 [2024-12-11 15:08:23.484596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.477 qpair failed and we were unable to recover it. 00:27:30.477 [2024-12-11 15:08:23.494543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.477 [2024-12-11 15:08:23.494627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.477 [2024-12-11 15:08:23.494641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.477 [2024-12-11 15:08:23.494648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.477 [2024-12-11 15:08:23.494655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.477 [2024-12-11 15:08:23.494670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.477 qpair failed and we were unable to recover it. 
00:27:30.477 [2024-12-11 15:08:23.504547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.477 [2024-12-11 15:08:23.504602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.477 [2024-12-11 15:08:23.504616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.477 [2024-12-11 15:08:23.504628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.477 [2024-12-11 15:08:23.504634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.477 [2024-12-11 15:08:23.504649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.477 qpair failed and we were unable to recover it. 00:27:30.477 [2024-12-11 15:08:23.514627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.478 [2024-12-11 15:08:23.514681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.478 [2024-12-11 15:08:23.514695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.478 [2024-12-11 15:08:23.514703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.478 [2024-12-11 15:08:23.514710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.478 [2024-12-11 15:08:23.514726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.478 qpair failed and we were unable to recover it. 00:27:30.737 [2024-12-11 15:08:23.524663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.737 [2024-12-11 15:08:23.524718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.737 [2024-12-11 15:08:23.524737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.737 [2024-12-11 15:08:23.524746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.737 [2024-12-11 15:08:23.524753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.737 [2024-12-11 15:08:23.524770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.737 qpair failed and we were unable to recover it. 
00:27:30.737 [2024-12-11 15:08:23.534670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.737 [2024-12-11 15:08:23.534748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.737 [2024-12-11 15:08:23.534768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.737 [2024-12-11 15:08:23.534777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.737 [2024-12-11 15:08:23.534784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.737 [2024-12-11 15:08:23.534801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-12-11 15:08:23.544722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.737 [2024-12-11 15:08:23.544822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.737 [2024-12-11 15:08:23.544840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.737 [2024-12-11 15:08:23.544850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.737 [2024-12-11 15:08:23.544858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.737 [2024-12-11 15:08:23.544875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-12-11 15:08:23.554745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.737 [2024-12-11 15:08:23.554823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.737 [2024-12-11 15:08:23.554837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.737 [2024-12-11 15:08:23.554845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.737 [2024-12-11 15:08:23.554852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.737 [2024-12-11 15:08:23.554868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.737 qpair failed and we were unable to recover it. 
00:27:30.737 [2024-12-11 15:08:23.564766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.737 [2024-12-11 15:08:23.564826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.737 [2024-12-11 15:08:23.564840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.737 [2024-12-11 15:08:23.564847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.737 [2024-12-11 15:08:23.564853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.737 [2024-12-11 15:08:23.564868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-12-11 15:08:23.574830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.737 [2024-12-11 15:08:23.574936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.737 [2024-12-11 15:08:23.574950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.737 [2024-12-11 15:08:23.574957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.737 [2024-12-11 15:08:23.574963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.737 [2024-12-11 15:08:23.574978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.737 qpair failed and we were unable to recover it. 00:27:30.737 [2024-12-11 15:08:23.584816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.737 [2024-12-11 15:08:23.584873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.737 [2024-12-11 15:08:23.584887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.737 [2024-12-11 15:08:23.584894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.737 [2024-12-11 15:08:23.584901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.738 [2024-12-11 15:08:23.584916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.738 qpair failed and we were unable to recover it. 
00:27:30.738 [2024-12-11 15:08:23.594774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.738 [2024-12-11 15:08:23.594836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.738 [2024-12-11 15:08:23.594853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.738 [2024-12-11 15:08:23.594861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.738 [2024-12-11 15:08:23.594868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.738 [2024-12-11 15:08:23.594884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-12-11 15:08:23.604920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.738 [2024-12-11 15:08:23.605015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.738 [2024-12-11 15:08:23.605030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.738 [2024-12-11 15:08:23.605037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.738 [2024-12-11 15:08:23.605043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.738 [2024-12-11 15:08:23.605058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-12-11 15:08:23.614886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.738 [2024-12-11 15:08:23.614944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.738 [2024-12-11 15:08:23.614958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.738 [2024-12-11 15:08:23.614966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.738 [2024-12-11 15:08:23.614972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.738 [2024-12-11 15:08:23.614987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.738 qpair failed and we were unable to recover it. 
00:27:30.738 [2024-12-11 15:08:23.624942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.738 [2024-12-11 15:08:23.625006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.738 [2024-12-11 15:08:23.625021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.738 [2024-12-11 15:08:23.625028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.738 [2024-12-11 15:08:23.625034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.738 [2024-12-11 15:08:23.625050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-12-11 15:08:23.634901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.738 [2024-12-11 15:08:23.634961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.738 [2024-12-11 15:08:23.634976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.738 [2024-12-11 15:08:23.634987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.738 [2024-12-11 15:08:23.634993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.738 [2024-12-11 15:08:23.635009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-12-11 15:08:23.645015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.738 [2024-12-11 15:08:23.645083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.738 [2024-12-11 15:08:23.645099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.738 [2024-12-11 15:08:23.645106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.738 [2024-12-11 15:08:23.645112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.738 [2024-12-11 15:08:23.645127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.738 qpair failed and we were unable to recover it. 
00:27:30.738 [2024-12-11 15:08:23.655082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.738 [2024-12-11 15:08:23.655168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.738 [2024-12-11 15:08:23.655184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.738 [2024-12-11 15:08:23.655192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.738 [2024-12-11 15:08:23.655198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.738 [2024-12-11 15:08:23.655213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-12-11 15:08:23.665059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.738 [2024-12-11 15:08:23.665123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.738 [2024-12-11 15:08:23.665138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.738 [2024-12-11 15:08:23.665145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.738 [2024-12-11 15:08:23.665151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.738 [2024-12-11 15:08:23.665172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-12-11 15:08:23.675130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.738 [2024-12-11 15:08:23.675231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.738 [2024-12-11 15:08:23.675248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.738 [2024-12-11 15:08:23.675256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.738 [2024-12-11 15:08:23.675262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.738 [2024-12-11 15:08:23.675278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.738 qpair failed and we were unable to recover it. 
00:27:30.738 [2024-12-11 15:08:23.685103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.738 [2024-12-11 15:08:23.685169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.738 [2024-12-11 15:08:23.685185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.738 [2024-12-11 15:08:23.685192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.738 [2024-12-11 15:08:23.685198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.738 [2024-12-11 15:08:23.685213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-12-11 15:08:23.695132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.738 [2024-12-11 15:08:23.695224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.738 [2024-12-11 15:08:23.695239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.738 [2024-12-11 15:08:23.695246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.738 [2024-12-11 15:08:23.695252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.738 [2024-12-11 15:08:23.695267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-12-11 15:08:23.705152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.738 [2024-12-11 15:08:23.705214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.738 [2024-12-11 15:08:23.705229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.738 [2024-12-11 15:08:23.705236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.738 [2024-12-11 15:08:23.705242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.738 [2024-12-11 15:08:23.705257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.738 qpair failed and we were unable to recover it. 
00:27:30.738 [2024-12-11 15:08:23.715237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.738 [2024-12-11 15:08:23.715340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.738 [2024-12-11 15:08:23.715354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.738 [2024-12-11 15:08:23.715361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.738 [2024-12-11 15:08:23.715368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.738 [2024-12-11 15:08:23.715383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.738 qpair failed and we were unable to recover it. 00:27:30.738 [2024-12-11 15:08:23.725214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.739 [2024-12-11 15:08:23.725296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.739 [2024-12-11 15:08:23.725310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.739 [2024-12-11 15:08:23.725317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.739 [2024-12-11 15:08:23.725323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.739 [2024-12-11 15:08:23.725338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-12-11 15:08:23.735322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.739 [2024-12-11 15:08:23.735400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.739 [2024-12-11 15:08:23.735416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.739 [2024-12-11 15:08:23.735424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.739 [2024-12-11 15:08:23.735430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.739 [2024-12-11 15:08:23.735445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.739 qpair failed and we were unable to recover it. 
00:27:30.739 [2024-12-11 15:08:23.745220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.739 [2024-12-11 15:08:23.745283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.739 [2024-12-11 15:08:23.745297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.739 [2024-12-11 15:08:23.745305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.739 [2024-12-11 15:08:23.745311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.739 [2024-12-11 15:08:23.745326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-12-11 15:08:23.755322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.739 [2024-12-11 15:08:23.755391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.739 [2024-12-11 15:08:23.755406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.739 [2024-12-11 15:08:23.755413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.739 [2024-12-11 15:08:23.755419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.739 [2024-12-11 15:08:23.755435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.739 [2024-12-11 15:08:23.765244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.739 [2024-12-11 15:08:23.765331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.739 [2024-12-11 15:08:23.765346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.739 [2024-12-11 15:08:23.765357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.739 [2024-12-11 15:08:23.765364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.739 [2024-12-11 15:08:23.765379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.739 qpair failed and we were unable to recover it. 
00:27:30.739 [2024-12-11 15:08:23.775364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.739 [2024-12-11 15:08:23.775425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.739 [2024-12-11 15:08:23.775439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.739 [2024-12-11 15:08:23.775446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.739 [2024-12-11 15:08:23.775452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.739 [2024-12-11 15:08:23.775467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.739 qpair failed and we were unable to recover it. 00:27:30.998 [2024-12-11 15:08:23.785387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.998 [2024-12-11 15:08:23.785442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.998 [2024-12-11 15:08:23.785461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.998 [2024-12-11 15:08:23.785470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.998 [2024-12-11 15:08:23.785476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.998 [2024-12-11 15:08:23.785494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.998 qpair failed and we were unable to recover it. 00:27:30.998 [2024-12-11 15:08:23.795348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.998 [2024-12-11 15:08:23.795406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.998 [2024-12-11 15:08:23.795425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.998 [2024-12-11 15:08:23.795433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.998 [2024-12-11 15:08:23.795439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.998 [2024-12-11 15:08:23.795456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.998 qpair failed and we were unable to recover it. 
00:27:30.998 [2024-12-11 15:08:23.805390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.999 [2024-12-11 15:08:23.805447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.999 [2024-12-11 15:08:23.805463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.999 [2024-12-11 15:08:23.805470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.999 [2024-12-11 15:08:23.805476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.999 [2024-12-11 15:08:23.805493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.999 qpair failed and we were unable to recover it. 00:27:30.999 [2024-12-11 15:08:23.815470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.999 [2024-12-11 15:08:23.815543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.999 [2024-12-11 15:08:23.815558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.999 [2024-12-11 15:08:23.815565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.999 [2024-12-11 15:08:23.815572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.999 [2024-12-11 15:08:23.815587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.999 qpair failed and we were unable to recover it. 00:27:30.999 [2024-12-11 15:08:23.825538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.999 [2024-12-11 15:08:23.825600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.999 [2024-12-11 15:08:23.825614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.999 [2024-12-11 15:08:23.825621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.999 [2024-12-11 15:08:23.825627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.999 [2024-12-11 15:08:23.825642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.999 qpair failed and we were unable to recover it. 
00:27:30.999 [2024-12-11 15:08:23.835567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.999 [2024-12-11 15:08:23.835628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.999 [2024-12-11 15:08:23.835644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.999 [2024-12-11 15:08:23.835652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.999 [2024-12-11 15:08:23.835658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.999 [2024-12-11 15:08:23.835674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.999 qpair failed and we were unable to recover it. 00:27:30.999 [2024-12-11 15:08:23.845545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.999 [2024-12-11 15:08:23.845623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.999 [2024-12-11 15:08:23.845639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.999 [2024-12-11 15:08:23.845646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.999 [2024-12-11 15:08:23.845652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.999 [2024-12-11 15:08:23.845668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.999 qpair failed and we were unable to recover it. 00:27:30.999 [2024-12-11 15:08:23.855598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.999 [2024-12-11 15:08:23.855703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.999 [2024-12-11 15:08:23.855718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.999 [2024-12-11 15:08:23.855725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.999 [2024-12-11 15:08:23.855731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.999 [2024-12-11 15:08:23.855746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.999 qpair failed and we were unable to recover it. 
00:27:30.999 [2024-12-11 15:08:23.865555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.999 [2024-12-11 15:08:23.865613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.999 [2024-12-11 15:08:23.865627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.999 [2024-12-11 15:08:23.865634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.999 [2024-12-11 15:08:23.865641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.999 [2024-12-11 15:08:23.865655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.999 qpair failed and we were unable to recover it. 00:27:30.999 [2024-12-11 15:08:23.875584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.999 [2024-12-11 15:08:23.875644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.999 [2024-12-11 15:08:23.875659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.999 [2024-12-11 15:08:23.875667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.999 [2024-12-11 15:08:23.875673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.999 [2024-12-11 15:08:23.875688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.999 qpair failed and we were unable to recover it. 00:27:30.999 [2024-12-11 15:08:23.885653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.999 [2024-12-11 15:08:23.885707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.999 [2024-12-11 15:08:23.885721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.999 [2024-12-11 15:08:23.885728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.999 [2024-12-11 15:08:23.885734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.999 [2024-12-11 15:08:23.885749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.999 qpair failed and we were unable to recover it. 
00:27:30.999 [2024-12-11 15:08:23.895664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.999 [2024-12-11 15:08:23.895722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.999 [2024-12-11 15:08:23.895736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.999 [2024-12-11 15:08:23.895749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.999 [2024-12-11 15:08:23.895755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.999 [2024-12-11 15:08:23.895770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.999 qpair failed and we were unable to recover it. 00:27:30.999 [2024-12-11 15:08:23.905741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.999 [2024-12-11 15:08:23.905798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.999 [2024-12-11 15:08:23.905812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.999 [2024-12-11 15:08:23.905819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.999 [2024-12-11 15:08:23.905825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.999 [2024-12-11 15:08:23.905839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.999 qpair failed and we were unable to recover it. 00:27:30.999 [2024-12-11 15:08:23.915685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.999 [2024-12-11 15:08:23.915741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.999 [2024-12-11 15:08:23.915755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.999 [2024-12-11 15:08:23.915763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.999 [2024-12-11 15:08:23.915770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.999 [2024-12-11 15:08:23.915785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.999 qpair failed and we were unable to recover it. 
00:27:30.999 [2024-12-11 15:08:23.925762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.999 [2024-12-11 15:08:23.925816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.999 [2024-12-11 15:08:23.925831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.999 [2024-12-11 15:08:23.925838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.999 [2024-12-11 15:08:23.925844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:30.999 [2024-12-11 15:08:23.925859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.999 qpair failed and we were unable to recover it. 00:27:30.999 [2024-12-11 15:08:23.935781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.999 [2024-12-11 15:08:23.935881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.000 [2024-12-11 15:08:23.935896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.000 [2024-12-11 15:08:23.935904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.000 [2024-12-11 15:08:23.935910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.000 [2024-12-11 15:08:23.935928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.000 qpair failed and we were unable to recover it. 00:27:31.000 [2024-12-11 15:08:23.945804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.000 [2024-12-11 15:08:23.945861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.000 [2024-12-11 15:08:23.945876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.000 [2024-12-11 15:08:23.945883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.000 [2024-12-11 15:08:23.945890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.000 [2024-12-11 15:08:23.945906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.000 qpair failed and we were unable to recover it. 
00:27:31.000 [2024-12-11 15:08:23.955868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.000 [2024-12-11 15:08:23.955934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.000 [2024-12-11 15:08:23.955951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.000 [2024-12-11 15:08:23.955958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.000 [2024-12-11 15:08:23.955964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.000 [2024-12-11 15:08:23.955980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.000 qpair failed and we were unable to recover it. 00:27:31.000 [2024-12-11 15:08:23.965902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.000 [2024-12-11 15:08:23.965962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.000 [2024-12-11 15:08:23.965977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.000 [2024-12-11 15:08:23.965984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.000 [2024-12-11 15:08:23.965990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.000 [2024-12-11 15:08:23.966005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.000 qpair failed and we were unable to recover it. 00:27:31.000 [2024-12-11 15:08:23.975907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.000 [2024-12-11 15:08:23.975965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.000 [2024-12-11 15:08:23.975980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.000 [2024-12-11 15:08:23.975987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.000 [2024-12-11 15:08:23.975994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.000 [2024-12-11 15:08:23.976009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.000 qpair failed and we were unable to recover it. 
00:27:31.000 [2024-12-11 15:08:23.985943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.000 [2024-12-11 15:08:23.986005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.000 [2024-12-11 15:08:23.986019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.000 [2024-12-11 15:08:23.986027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.000 [2024-12-11 15:08:23.986033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.000 [2024-12-11 15:08:23.986048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.000 qpair failed and we were unable to recover it. 00:27:31.000 [2024-12-11 15:08:23.995971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.000 [2024-12-11 15:08:23.996040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.000 [2024-12-11 15:08:23.996055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.000 [2024-12-11 15:08:23.996063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.000 [2024-12-11 15:08:23.996069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.000 [2024-12-11 15:08:23.996084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.000 qpair failed and we were unable to recover it. 00:27:31.000 [2024-12-11 15:08:24.005995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.000 [2024-12-11 15:08:24.006053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.000 [2024-12-11 15:08:24.006068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.000 [2024-12-11 15:08:24.006075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.000 [2024-12-11 15:08:24.006081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.000 [2024-12-11 15:08:24.006096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.000 qpair failed and we were unable to recover it. 
00:27:31.000 [2024-12-11 15:08:24.016019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.000 [2024-12-11 15:08:24.016110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.000 [2024-12-11 15:08:24.016124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.000 [2024-12-11 15:08:24.016131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.000 [2024-12-11 15:08:24.016137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.000 [2024-12-11 15:08:24.016153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.000 qpair failed and we were unable to recover it. 00:27:31.000 [2024-12-11 15:08:24.026108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.000 [2024-12-11 15:08:24.026177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.000 [2024-12-11 15:08:24.026192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.000 [2024-12-11 15:08:24.026202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.000 [2024-12-11 15:08:24.026209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.000 [2024-12-11 15:08:24.026224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.000 qpair failed and we were unable to recover it. 00:27:31.000 [2024-12-11 15:08:24.036098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.000 [2024-12-11 15:08:24.036154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.000 [2024-12-11 15:08:24.036174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.000 [2024-12-11 15:08:24.036181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.000 [2024-12-11 15:08:24.036188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.000 [2024-12-11 15:08:24.036203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.000 qpair failed and we were unable to recover it. 
00:27:31.260 [2024-12-11 15:08:24.046155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.260 [2024-12-11 15:08:24.046217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.260 [2024-12-11 15:08:24.046237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.260 [2024-12-11 15:08:24.046245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.260 [2024-12-11 15:08:24.046251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.260 [2024-12-11 15:08:24.046269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.260 qpair failed and we were unable to recover it. 00:27:31.260 [2024-12-11 15:08:24.056150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.260 [2024-12-11 15:08:24.056211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.260 [2024-12-11 15:08:24.056230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.260 [2024-12-11 15:08:24.056238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.260 [2024-12-11 15:08:24.056245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.260 [2024-12-11 15:08:24.056262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.260 qpair failed and we were unable to recover it. 00:27:31.260 [2024-12-11 15:08:24.066169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.260 [2024-12-11 15:08:24.066277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.260 [2024-12-11 15:08:24.066292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.260 [2024-12-11 15:08:24.066299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.260 [2024-12-11 15:08:24.066306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.260 [2024-12-11 15:08:24.066325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.260 qpair failed and we were unable to recover it. 
00:27:31.260 [2024-12-11 15:08:24.076195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.260 [2024-12-11 15:08:24.076251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.260 [2024-12-11 15:08:24.076266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.260 [2024-12-11 15:08:24.076273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.260 [2024-12-11 15:08:24.076280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.260 [2024-12-11 15:08:24.076295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.260 qpair failed and we were unable to recover it. 00:27:31.260 [2024-12-11 15:08:24.086228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.260 [2024-12-11 15:08:24.086286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.260 [2024-12-11 15:08:24.086301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.260 [2024-12-11 15:08:24.086308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.260 [2024-12-11 15:08:24.086315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.260 [2024-12-11 15:08:24.086331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.260 qpair failed and we were unable to recover it. 00:27:31.260 [2024-12-11 15:08:24.096290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.260 [2024-12-11 15:08:24.096354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.260 [2024-12-11 15:08:24.096369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.260 [2024-12-11 15:08:24.096377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.260 [2024-12-11 15:08:24.096383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.260 [2024-12-11 15:08:24.096398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.260 qpair failed and we were unable to recover it. 
00:27:31.260 [2024-12-11 15:08:24.106277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.260 [2024-12-11 15:08:24.106337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.260 [2024-12-11 15:08:24.106351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.260 [2024-12-11 15:08:24.106358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.261 [2024-12-11 15:08:24.106365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.261 [2024-12-11 15:08:24.106379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.261 qpair failed and we were unable to recover it. 00:27:31.261 [2024-12-11 15:08:24.116312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.261 [2024-12-11 15:08:24.116371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.261 [2024-12-11 15:08:24.116386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.261 [2024-12-11 15:08:24.116394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.261 [2024-12-11 15:08:24.116401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.261 [2024-12-11 15:08:24.116416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.261 qpair failed and we were unable to recover it. 00:27:31.261 [2024-12-11 15:08:24.126370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.261 [2024-12-11 15:08:24.126424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.261 [2024-12-11 15:08:24.126439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.261 [2024-12-11 15:08:24.126446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.261 [2024-12-11 15:08:24.126452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.261 [2024-12-11 15:08:24.126467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.261 qpair failed and we were unable to recover it. 
00:27:31.261 [2024-12-11 15:08:24.136420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.261 [2024-12-11 15:08:24.136478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.261 [2024-12-11 15:08:24.136493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.261 [2024-12-11 15:08:24.136501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.261 [2024-12-11 15:08:24.136507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.261 [2024-12-11 15:08:24.136522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.261 qpair failed and we were unable to recover it. 00:27:31.261 [2024-12-11 15:08:24.146390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.261 [2024-12-11 15:08:24.146477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.261 [2024-12-11 15:08:24.146492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.261 [2024-12-11 15:08:24.146499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.261 [2024-12-11 15:08:24.146505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.261 [2024-12-11 15:08:24.146520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.261 qpair failed and we were unable to recover it. 00:27:31.261 [2024-12-11 15:08:24.156426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.261 [2024-12-11 15:08:24.156484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.261 [2024-12-11 15:08:24.156498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.261 [2024-12-11 15:08:24.156508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.261 [2024-12-11 15:08:24.156514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.261 [2024-12-11 15:08:24.156530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.261 qpair failed and we were unable to recover it. 
00:27:31.261 [2024-12-11 15:08:24.166514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.261 [2024-12-11 15:08:24.166574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.261 [2024-12-11 15:08:24.166588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.261 [2024-12-11 15:08:24.166595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.261 [2024-12-11 15:08:24.166602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.261 [2024-12-11 15:08:24.166617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.261 qpair failed and we were unable to recover it. 00:27:31.261 [2024-12-11 15:08:24.176568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.261 [2024-12-11 15:08:24.176674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.261 [2024-12-11 15:08:24.176689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.261 [2024-12-11 15:08:24.176696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.261 [2024-12-11 15:08:24.176703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.261 [2024-12-11 15:08:24.176717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.261 qpair failed and we were unable to recover it. 00:27:31.261 [2024-12-11 15:08:24.186452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.261 [2024-12-11 15:08:24.186520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.261 [2024-12-11 15:08:24.186534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.261 [2024-12-11 15:08:24.186542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.261 [2024-12-11 15:08:24.186548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.261 [2024-12-11 15:08:24.186563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.261 qpair failed and we were unable to recover it. 
00:27:31.261 [2024-12-11 15:08:24.196541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.261 [2024-12-11 15:08:24.196617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.261 [2024-12-11 15:08:24.196632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.261 [2024-12-11 15:08:24.196639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.261 [2024-12-11 15:08:24.196646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.261 [2024-12-11 15:08:24.196664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.261 qpair failed and we were unable to recover it. 00:27:31.261 [2024-12-11 15:08:24.206572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.261 [2024-12-11 15:08:24.206646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.261 [2024-12-11 15:08:24.206660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.261 [2024-12-11 15:08:24.206668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.261 [2024-12-11 15:08:24.206674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.261 [2024-12-11 15:08:24.206689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.261 qpair failed and we were unable to recover it. 00:27:31.261 [2024-12-11 15:08:24.216638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.261 [2024-12-11 15:08:24.216696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.261 [2024-12-11 15:08:24.216710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.261 [2024-12-11 15:08:24.216717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.261 [2024-12-11 15:08:24.216723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.261 [2024-12-11 15:08:24.216739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.261 qpair failed and we were unable to recover it. 
00:27:31.261 [2024-12-11 15:08:24.226631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.261 [2024-12-11 15:08:24.226688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.261 [2024-12-11 15:08:24.226702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.261 [2024-12-11 15:08:24.226709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.261 [2024-12-11 15:08:24.226716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.261 [2024-12-11 15:08:24.226731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.261 qpair failed and we were unable to recover it. 00:27:31.261 [2024-12-11 15:08:24.236659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.261 [2024-12-11 15:08:24.236761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.261 [2024-12-11 15:08:24.236777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.261 [2024-12-11 15:08:24.236784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.261 [2024-12-11 15:08:24.236791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.261 [2024-12-11 15:08:24.236806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.261 qpair failed and we were unable to recover it. 00:27:31.261 [2024-12-11 15:08:24.246670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.262 [2024-12-11 15:08:24.246765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.262 [2024-12-11 15:08:24.246780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.262 [2024-12-11 15:08:24.246787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.262 [2024-12-11 15:08:24.246794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.262 [2024-12-11 15:08:24.246809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.262 qpair failed and we were unable to recover it. 
00:27:31.262 [2024-12-11 15:08:24.256727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.262 [2024-12-11 15:08:24.256793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.262 [2024-12-11 15:08:24.256809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.262 [2024-12-11 15:08:24.256818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.262 [2024-12-11 15:08:24.256824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.262 [2024-12-11 15:08:24.256840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.262 qpair failed and we were unable to recover it. 00:27:31.262 [2024-12-11 15:08:24.266721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.262 [2024-12-11 15:08:24.266779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.262 [2024-12-11 15:08:24.266794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.262 [2024-12-11 15:08:24.266801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.262 [2024-12-11 15:08:24.266809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.262 [2024-12-11 15:08:24.266824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.262 qpair failed and we were unable to recover it. 00:27:31.262 [2024-12-11 15:08:24.276766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.262 [2024-12-11 15:08:24.276822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.262 [2024-12-11 15:08:24.276836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.262 [2024-12-11 15:08:24.276844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.262 [2024-12-11 15:08:24.276851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.262 [2024-12-11 15:08:24.276867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.262 qpair failed and we were unable to recover it. 
00:27:31.262 [2024-12-11 15:08:24.286790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.262 [2024-12-11 15:08:24.286842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.262 [2024-12-11 15:08:24.286856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.262 [2024-12-11 15:08:24.286866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.262 [2024-12-11 15:08:24.286873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.262 [2024-12-11 15:08:24.286888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.262 qpair failed and we were unable to recover it. 00:27:31.262 [2024-12-11 15:08:24.296828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.262 [2024-12-11 15:08:24.296890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.262 [2024-12-11 15:08:24.296905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.262 [2024-12-11 15:08:24.296913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.262 [2024-12-11 15:08:24.296919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.262 [2024-12-11 15:08:24.296934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.262 qpair failed and we were unable to recover it. 00:27:31.521 [2024-12-11 15:08:24.306833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.521 [2024-12-11 15:08:24.306946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.521 [2024-12-11 15:08:24.306966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.521 [2024-12-11 15:08:24.306974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.521 [2024-12-11 15:08:24.306980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.522 [2024-12-11 15:08:24.306997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.522 qpair failed and we were unable to recover it. 
00:27:31.522 [2024-12-11 15:08:24.316895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.522 [2024-12-11 15:08:24.316977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.522 [2024-12-11 15:08:24.316996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.522 [2024-12-11 15:08:24.317004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.522 [2024-12-11 15:08:24.317010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.522 [2024-12-11 15:08:24.317027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.522 qpair failed and we were unable to recover it. 00:27:31.522 [2024-12-11 15:08:24.326959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.522 [2024-12-11 15:08:24.327028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.522 [2024-12-11 15:08:24.327043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.522 [2024-12-11 15:08:24.327050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.522 [2024-12-11 15:08:24.327057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.522 [2024-12-11 15:08:24.327075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.522 qpair failed and we were unable to recover it. 00:27:31.522 [2024-12-11 15:08:24.336951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.522 [2024-12-11 15:08:24.337009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.522 [2024-12-11 15:08:24.337025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.522 [2024-12-11 15:08:24.337033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.522 [2024-12-11 15:08:24.337040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.522 [2024-12-11 15:08:24.337056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.522 qpair failed and we were unable to recover it. 
00:27:31.522 [2024-12-11 15:08:24.346962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.522 [2024-12-11 15:08:24.347019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.522 [2024-12-11 15:08:24.347034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.522 [2024-12-11 15:08:24.347041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.522 [2024-12-11 15:08:24.347048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.522 [2024-12-11 15:08:24.347063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.522 qpair failed and we were unable to recover it. 00:27:31.522 [2024-12-11 15:08:24.356968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.522 [2024-12-11 15:08:24.357026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.522 [2024-12-11 15:08:24.357041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.522 [2024-12-11 15:08:24.357049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.522 [2024-12-11 15:08:24.357055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.522 [2024-12-11 15:08:24.357070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.522 qpair failed and we were unable to recover it. 00:27:31.522 [2024-12-11 15:08:24.366971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.522 [2024-12-11 15:08:24.367027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.522 [2024-12-11 15:08:24.367043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.522 [2024-12-11 15:08:24.367050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.522 [2024-12-11 15:08:24.367057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.522 [2024-12-11 15:08:24.367072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.522 qpair failed and we were unable to recover it. 
00:27:31.522 [2024-12-11 15:08:24.377058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.522 [2024-12-11 15:08:24.377173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.522 [2024-12-11 15:08:24.377189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.522 [2024-12-11 15:08:24.377196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.522 [2024-12-11 15:08:24.377202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.522 [2024-12-11 15:08:24.377217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.522 qpair failed and we were unable to recover it. 00:27:31.522 [2024-12-11 15:08:24.387132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.522 [2024-12-11 15:08:24.387210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.522 [2024-12-11 15:08:24.387226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.522 [2024-12-11 15:08:24.387234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.522 [2024-12-11 15:08:24.387240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.522 [2024-12-11 15:08:24.387256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.522 qpair failed and we were unable to recover it. 00:27:31.522 [2024-12-11 15:08:24.397124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.522 [2024-12-11 15:08:24.397184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.522 [2024-12-11 15:08:24.397200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.522 [2024-12-11 15:08:24.397210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.522 [2024-12-11 15:08:24.397218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.522 [2024-12-11 15:08:24.397234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.522 qpair failed and we were unable to recover it. 
00:27:31.522 [2024-12-11 15:08:24.407156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.522 [2024-12-11 15:08:24.407254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.522 [2024-12-11 15:08:24.407269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.522 [2024-12-11 15:08:24.407276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.522 [2024-12-11 15:08:24.407283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.522 [2024-12-11 15:08:24.407298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.522 qpair failed and we were unable to recover it. 00:27:31.522 [2024-12-11 15:08:24.417104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.522 [2024-12-11 15:08:24.417167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.522 [2024-12-11 15:08:24.417182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.522 [2024-12-11 15:08:24.417193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.522 [2024-12-11 15:08:24.417199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.522 [2024-12-11 15:08:24.417214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.522 qpair failed and we were unable to recover it. 00:27:31.522 [2024-12-11 15:08:24.427215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.522 [2024-12-11 15:08:24.427274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.522 [2024-12-11 15:08:24.427288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.522 [2024-12-11 15:08:24.427295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.522 [2024-12-11 15:08:24.427301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.522 [2024-12-11 15:08:24.427316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.522 qpair failed and we were unable to recover it. 
00:27:31.522 [2024-12-11 15:08:24.437271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.522 [2024-12-11 15:08:24.437351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.522 [2024-12-11 15:08:24.437367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.522 [2024-12-11 15:08:24.437375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.522 [2024-12-11 15:08:24.437381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.522 [2024-12-11 15:08:24.437396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.523 qpair failed and we were unable to recover it. 00:27:31.523 [2024-12-11 15:08:24.447252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.523 [2024-12-11 15:08:24.447309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.523 [2024-12-11 15:08:24.447324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.523 [2024-12-11 15:08:24.447332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.523 [2024-12-11 15:08:24.447338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.523 [2024-12-11 15:08:24.447353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.523 qpair failed and we were unable to recover it. 00:27:31.523 [2024-12-11 15:08:24.457310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.523 [2024-12-11 15:08:24.457367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.523 [2024-12-11 15:08:24.457381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.523 [2024-12-11 15:08:24.457388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.523 [2024-12-11 15:08:24.457395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.523 [2024-12-11 15:08:24.457413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.523 qpair failed and we were unable to recover it. 
00:27:31.523 [2024-12-11 15:08:24.467357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.523 [2024-12-11 15:08:24.467424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.523 [2024-12-11 15:08:24.467438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.523 [2024-12-11 15:08:24.467445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.523 [2024-12-11 15:08:24.467451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.523 [2024-12-11 15:08:24.467466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.523 qpair failed and we were unable to recover it. 00:27:31.523 [2024-12-11 15:08:24.477427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.523 [2024-12-11 15:08:24.477508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.523 [2024-12-11 15:08:24.477523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.523 [2024-12-11 15:08:24.477531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.523 [2024-12-11 15:08:24.477537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.523 [2024-12-11 15:08:24.477552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.523 qpair failed and we were unable to recover it. 00:27:31.523 [2024-12-11 15:08:24.487415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.523 [2024-12-11 15:08:24.487473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.523 [2024-12-11 15:08:24.487488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.523 [2024-12-11 15:08:24.487496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.523 [2024-12-11 15:08:24.487502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.523 [2024-12-11 15:08:24.487518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.523 qpair failed and we were unable to recover it. 
00:27:31.523 [2024-12-11 15:08:24.497476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.523 [2024-12-11 15:08:24.497578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.523 [2024-12-11 15:08:24.497593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.523 [2024-12-11 15:08:24.497600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.523 [2024-12-11 15:08:24.497606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.523 [2024-12-11 15:08:24.497621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.523 qpair failed and we were unable to recover it. 00:27:31.523 [2024-12-11 15:08:24.507441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.523 [2024-12-11 15:08:24.507502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.523 [2024-12-11 15:08:24.507517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.523 [2024-12-11 15:08:24.507525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.523 [2024-12-11 15:08:24.507532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.523 [2024-12-11 15:08:24.507547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.523 qpair failed and we were unable to recover it. 00:27:31.523 [2024-12-11 15:08:24.517506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.523 [2024-12-11 15:08:24.517561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.523 [2024-12-11 15:08:24.517575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.523 [2024-12-11 15:08:24.517582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.523 [2024-12-11 15:08:24.517589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.523 [2024-12-11 15:08:24.517604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.523 qpair failed and we were unable to recover it. 
00:27:31.523 [2024-12-11 15:08:24.527514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.523 [2024-12-11 15:08:24.527582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.523 [2024-12-11 15:08:24.527596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.523 [2024-12-11 15:08:24.527603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.523 [2024-12-11 15:08:24.527609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.523 [2024-12-11 15:08:24.527625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.523 qpair failed and we were unable to recover it. 00:27:31.523 [2024-12-11 15:08:24.537595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.523 [2024-12-11 15:08:24.537699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.523 [2024-12-11 15:08:24.537716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.523 [2024-12-11 15:08:24.537723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.523 [2024-12-11 15:08:24.537729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.523 [2024-12-11 15:08:24.537744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.523 qpair failed and we were unable to recover it. 00:27:31.523 [2024-12-11 15:08:24.547560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.523 [2024-12-11 15:08:24.547616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.523 [2024-12-11 15:08:24.547632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.523 [2024-12-11 15:08:24.547642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.523 [2024-12-11 15:08:24.547648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.523 [2024-12-11 15:08:24.547664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.523 qpair failed and we were unable to recover it. 
00:27:31.523 [2024-12-11 15:08:24.557587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.523 [2024-12-11 15:08:24.557645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.523 [2024-12-11 15:08:24.557660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.523 [2024-12-11 15:08:24.557668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.523 [2024-12-11 15:08:24.557674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.523 [2024-12-11 15:08:24.557689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.523 qpair failed and we were unable to recover it. 00:27:31.783 [2024-12-11 15:08:24.567570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.783 [2024-12-11 15:08:24.567631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.783 [2024-12-11 15:08:24.567651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.783 [2024-12-11 15:08:24.567659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.783 [2024-12-11 15:08:24.567666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.783 [2024-12-11 15:08:24.567684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.783 qpair failed and we were unable to recover it. 00:27:31.783 [2024-12-11 15:08:24.577710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.783 [2024-12-11 15:08:24.577813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.783 [2024-12-11 15:08:24.577832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.783 [2024-12-11 15:08:24.577840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.783 [2024-12-11 15:08:24.577846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.783 [2024-12-11 15:08:24.577863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.783 qpair failed and we were unable to recover it. 
00:27:31.783 [2024-12-11 15:08:24.587684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.783 [2024-12-11 15:08:24.587761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.783 [2024-12-11 15:08:24.587777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.783 [2024-12-11 15:08:24.587784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.783 [2024-12-11 15:08:24.587791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.783 [2024-12-11 15:08:24.587810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.783 qpair failed and we were unable to recover it. 00:27:31.783 [2024-12-11 15:08:24.597707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.783 [2024-12-11 15:08:24.597766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.783 [2024-12-11 15:08:24.597784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.783 [2024-12-11 15:08:24.597792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.783 [2024-12-11 15:08:24.597798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.783 [2024-12-11 15:08:24.597815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.783 qpair failed and we were unable to recover it. 00:27:31.783 [2024-12-11 15:08:24.607706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.783 [2024-12-11 15:08:24.607771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.783 [2024-12-11 15:08:24.607786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.783 [2024-12-11 15:08:24.607794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.783 [2024-12-11 15:08:24.607801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.783 [2024-12-11 15:08:24.607816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.783 qpair failed and we were unable to recover it. 
00:27:31.783 [2024-12-11 15:08:24.617699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.783 [2024-12-11 15:08:24.617769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.783 [2024-12-11 15:08:24.617785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.783 [2024-12-11 15:08:24.617792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.783 [2024-12-11 15:08:24.617799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.783 [2024-12-11 15:08:24.617814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.783 qpair failed and we were unable to recover it. 00:27:31.783 [2024-12-11 15:08:24.627795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.783 [2024-12-11 15:08:24.627851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.784 [2024-12-11 15:08:24.627865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.784 [2024-12-11 15:08:24.627872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.784 [2024-12-11 15:08:24.627879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.784 [2024-12-11 15:08:24.627894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.784 qpair failed and we were unable to recover it. 00:27:31.784 [2024-12-11 15:08:24.637822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.784 [2024-12-11 15:08:24.637878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.784 [2024-12-11 15:08:24.637894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.784 [2024-12-11 15:08:24.637901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.784 [2024-12-11 15:08:24.637908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.784 [2024-12-11 15:08:24.637923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.784 qpair failed and we were unable to recover it. 
00:27:31.784 [2024-12-11 15:08:24.647794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.784 [2024-12-11 15:08:24.647846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.784 [2024-12-11 15:08:24.647861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.784 [2024-12-11 15:08:24.647868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.784 [2024-12-11 15:08:24.647874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.784 [2024-12-11 15:08:24.647889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.784 qpair failed and we were unable to recover it. 00:27:31.784 [2024-12-11 15:08:24.657893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.784 [2024-12-11 15:08:24.657952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.784 [2024-12-11 15:08:24.657967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.784 [2024-12-11 15:08:24.657975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.784 [2024-12-11 15:08:24.657982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.784 [2024-12-11 15:08:24.657996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.784 qpair failed and we were unable to recover it. 00:27:31.784 [2024-12-11 15:08:24.667854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.784 [2024-12-11 15:08:24.667912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.784 [2024-12-11 15:08:24.667926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.784 [2024-12-11 15:08:24.667934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.784 [2024-12-11 15:08:24.667940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.784 [2024-12-11 15:08:24.667955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.784 qpair failed and we were unable to recover it. 
00:27:31.784 [2024-12-11 15:08:24.677982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.784 [2024-12-11 15:08:24.678061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.784 [2024-12-11 15:08:24.678076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.784 [2024-12-11 15:08:24.678089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.784 [2024-12-11 15:08:24.678096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.784 [2024-12-11 15:08:24.678111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.784 qpair failed and we were unable to recover it. 00:27:31.784 [2024-12-11 15:08:24.687947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.784 [2024-12-11 15:08:24.688018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.784 [2024-12-11 15:08:24.688032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.784 [2024-12-11 15:08:24.688040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.784 [2024-12-11 15:08:24.688047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.784 [2024-12-11 15:08:24.688062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.784 qpair failed and we were unable to recover it. 00:27:31.784 [2024-12-11 15:08:24.698031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.784 [2024-12-11 15:08:24.698093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.784 [2024-12-11 15:08:24.698109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.784 [2024-12-11 15:08:24.698116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.784 [2024-12-11 15:08:24.698122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.784 [2024-12-11 15:08:24.698137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.784 qpair failed and we were unable to recover it. 
00:27:31.784 [2024-12-11 15:08:24.708041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.784 [2024-12-11 15:08:24.708113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.784 [2024-12-11 15:08:24.708128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.784 [2024-12-11 15:08:24.708136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.784 [2024-12-11 15:08:24.708142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.784 [2024-12-11 15:08:24.708160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.784 qpair failed and we were unable to recover it. 00:27:31.784 [2024-12-11 15:08:24.718055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.784 [2024-12-11 15:08:24.718114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.784 [2024-12-11 15:08:24.718129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.784 [2024-12-11 15:08:24.718136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.784 [2024-12-11 15:08:24.718143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.784 [2024-12-11 15:08:24.718166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.784 qpair failed and we were unable to recover it. 00:27:31.784 [2024-12-11 15:08:24.728150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.784 [2024-12-11 15:08:24.728249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.784 [2024-12-11 15:08:24.728265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.784 [2024-12-11 15:08:24.728272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.784 [2024-12-11 15:08:24.728278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.784 [2024-12-11 15:08:24.728294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.784 qpair failed and we were unable to recover it. 
00:27:31.784 [2024-12-11 15:08:24.738128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.784 [2024-12-11 15:08:24.738205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.784 [2024-12-11 15:08:24.738219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.784 [2024-12-11 15:08:24.738227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.784 [2024-12-11 15:08:24.738233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.784 [2024-12-11 15:08:24.738249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.784 qpair failed and we were unable to recover it. 00:27:31.784 [2024-12-11 15:08:24.748152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.784 [2024-12-11 15:08:24.748215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.784 [2024-12-11 15:08:24.748230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.784 [2024-12-11 15:08:24.748238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.784 [2024-12-11 15:08:24.748245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.784 [2024-12-11 15:08:24.748260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.784 qpair failed and we were unable to recover it. 00:27:31.784 [2024-12-11 15:08:24.758174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.784 [2024-12-11 15:08:24.758229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.784 [2024-12-11 15:08:24.758244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.784 [2024-12-11 15:08:24.758251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.784 [2024-12-11 15:08:24.758258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.784 [2024-12-11 15:08:24.758273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.785 qpair failed and we were unable to recover it. 
00:27:31.785 [2024-12-11 15:08:24.768222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.785 [2024-12-11 15:08:24.768290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.785 [2024-12-11 15:08:24.768304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.785 [2024-12-11 15:08:24.768311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.785 [2024-12-11 15:08:24.768318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.785 [2024-12-11 15:08:24.768333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.785 qpair failed and we were unable to recover it. 00:27:31.785 [2024-12-11 15:08:24.778262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.785 [2024-12-11 15:08:24.778319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.785 [2024-12-11 15:08:24.778333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.785 [2024-12-11 15:08:24.778340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.785 [2024-12-11 15:08:24.778347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.785 [2024-12-11 15:08:24.778361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.785 qpair failed and we were unable to recover it. 00:27:31.785 [2024-12-11 15:08:24.788253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.785 [2024-12-11 15:08:24.788314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.785 [2024-12-11 15:08:24.788329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.785 [2024-12-11 15:08:24.788336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.785 [2024-12-11 15:08:24.788342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.785 [2024-12-11 15:08:24.788357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.785 qpair failed and we were unable to recover it. 
00:27:31.785 [2024-12-11 15:08:24.798278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.785 [2024-12-11 15:08:24.798337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.785 [2024-12-11 15:08:24.798352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.785 [2024-12-11 15:08:24.798360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.785 [2024-12-11 15:08:24.798366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.785 [2024-12-11 15:08:24.798381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.785 qpair failed and we were unable to recover it. 00:27:31.785 [2024-12-11 15:08:24.808315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.785 [2024-12-11 15:08:24.808374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.785 [2024-12-11 15:08:24.808388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.785 [2024-12-11 15:08:24.808398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.785 [2024-12-11 15:08:24.808405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.785 [2024-12-11 15:08:24.808421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.785 qpair failed and we were unable to recover it. 00:27:31.785 [2024-12-11 15:08:24.818373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.785 [2024-12-11 15:08:24.818477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.785 [2024-12-11 15:08:24.818491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.785 [2024-12-11 15:08:24.818498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.785 [2024-12-11 15:08:24.818504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:31.785 [2024-12-11 15:08:24.818519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.785 qpair failed and we were unable to recover it. 
00:27:32.044 [2024-12-11 15:08:24.828370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.044 [2024-12-11 15:08:24.828441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.044 [2024-12-11 15:08:24.828461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.044 [2024-12-11 15:08:24.828472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.044 [2024-12-11 15:08:24.828479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.044 [2024-12-11 15:08:24.828495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.044 qpair failed and we were unable to recover it. 00:27:32.044 [2024-12-11 15:08:24.838388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.044 [2024-12-11 15:08:24.838446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.044 [2024-12-11 15:08:24.838465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.045 [2024-12-11 15:08:24.838472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.045 [2024-12-11 15:08:24.838480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.045 [2024-12-11 15:08:24.838497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.045 qpair failed and we were unable to recover it. 00:27:32.045 [2024-12-11 15:08:24.848423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.045 [2024-12-11 15:08:24.848475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.045 [2024-12-11 15:08:24.848491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.045 [2024-12-11 15:08:24.848498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.045 [2024-12-11 15:08:24.848505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.045 [2024-12-11 15:08:24.848524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.045 qpair failed and we were unable to recover it. 
00:27:32.045 [2024-12-11 15:08:24.858464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.045 [2024-12-11 15:08:24.858538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.045 [2024-12-11 15:08:24.858554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.045 [2024-12-11 15:08:24.858562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.045 [2024-12-11 15:08:24.858568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.045 [2024-12-11 15:08:24.858585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.045 qpair failed and we were unable to recover it. 00:27:32.045 [2024-12-11 15:08:24.868482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.045 [2024-12-11 15:08:24.868535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.045 [2024-12-11 15:08:24.868550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.045 [2024-12-11 15:08:24.868558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.045 [2024-12-11 15:08:24.868564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.045 [2024-12-11 15:08:24.868580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.045 qpair failed and we were unable to recover it. 00:27:32.045 [2024-12-11 15:08:24.878498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.045 [2024-12-11 15:08:24.878552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.045 [2024-12-11 15:08:24.878567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.045 [2024-12-11 15:08:24.878575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.045 [2024-12-11 15:08:24.878581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.045 [2024-12-11 15:08:24.878597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.045 qpair failed and we were unable to recover it. 
00:27:32.045 [2024-12-11 15:08:24.888580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.045 [2024-12-11 15:08:24.888633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.045 [2024-12-11 15:08:24.888647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.045 [2024-12-11 15:08:24.888654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.045 [2024-12-11 15:08:24.888661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.045 [2024-12-11 15:08:24.888676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.045 qpair failed and we were unable to recover it. 00:27:32.045 [2024-12-11 15:08:24.898578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.045 [2024-12-11 15:08:24.898677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.045 [2024-12-11 15:08:24.898691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.045 [2024-12-11 15:08:24.898698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.045 [2024-12-11 15:08:24.898705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.045 [2024-12-11 15:08:24.898720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.045 qpair failed and we were unable to recover it. 00:27:32.045 [2024-12-11 15:08:24.908592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.045 [2024-12-11 15:08:24.908647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.045 [2024-12-11 15:08:24.908662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.045 [2024-12-11 15:08:24.908668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.045 [2024-12-11 15:08:24.908675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.045 [2024-12-11 15:08:24.908690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.045 qpair failed and we were unable to recover it. 
00:27:32.045 [2024-12-11 15:08:24.918614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.045 [2024-12-11 15:08:24.918668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.045 [2024-12-11 15:08:24.918682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.045 [2024-12-11 15:08:24.918690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.045 [2024-12-11 15:08:24.918697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.045 [2024-12-11 15:08:24.918712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.045 qpair failed and we were unable to recover it. 00:27:32.045 [2024-12-11 15:08:24.928646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.045 [2024-12-11 15:08:24.928703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.045 [2024-12-11 15:08:24.928719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.045 [2024-12-11 15:08:24.928726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.045 [2024-12-11 15:08:24.928733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.045 [2024-12-11 15:08:24.928747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.045 qpair failed and we were unable to recover it. 00:27:32.045 [2024-12-11 15:08:24.938687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.045 [2024-12-11 15:08:24.938748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.045 [2024-12-11 15:08:24.938763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.045 [2024-12-11 15:08:24.938774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.045 [2024-12-11 15:08:24.938780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.045 [2024-12-11 15:08:24.938796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.045 qpair failed and we were unable to recover it. 
00:27:32.045 [2024-12-11 15:08:24.948704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.045 [2024-12-11 15:08:24.948793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.045 [2024-12-11 15:08:24.948808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.045 [2024-12-11 15:08:24.948815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.045 [2024-12-11 15:08:24.948821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.045 [2024-12-11 15:08:24.948836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.045 qpair failed and we were unable to recover it. 00:27:32.045 [2024-12-11 15:08:24.958747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.045 [2024-12-11 15:08:24.958803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.045 [2024-12-11 15:08:24.958817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.045 [2024-12-11 15:08:24.958825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.045 [2024-12-11 15:08:24.958831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.045 [2024-12-11 15:08:24.958846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.045 qpair failed and we were unable to recover it. 00:27:32.045 [2024-12-11 15:08:24.968765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.045 [2024-12-11 15:08:24.968822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.045 [2024-12-11 15:08:24.968836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.045 [2024-12-11 15:08:24.968843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.045 [2024-12-11 15:08:24.968850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.046 [2024-12-11 15:08:24.968865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.046 qpair failed and we were unable to recover it. 
00:27:32.046 [2024-12-11 15:08:24.978741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.046 [2024-12-11 15:08:24.978833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.046 [2024-12-11 15:08:24.978848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.046 [2024-12-11 15:08:24.978856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.046 [2024-12-11 15:08:24.978862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.046 [2024-12-11 15:08:24.978880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.046 qpair failed and we were unable to recover it. 00:27:32.046 [2024-12-11 15:08:24.988765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.046 [2024-12-11 15:08:24.988844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.046 [2024-12-11 15:08:24.988859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.046 [2024-12-11 15:08:24.988867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.046 [2024-12-11 15:08:24.988873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.046 [2024-12-11 15:08:24.988888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.046 qpair failed and we were unable to recover it. 00:27:32.046 [2024-12-11 15:08:24.998784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.046 [2024-12-11 15:08:24.998842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.046 [2024-12-11 15:08:24.998856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.046 [2024-12-11 15:08:24.998864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.046 [2024-12-11 15:08:24.998870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.046 [2024-12-11 15:08:24.998885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.046 qpair failed and we were unable to recover it. 
00:27:32.046 [2024-12-11 15:08:25.008881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.046 [2024-12-11 15:08:25.008934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.046 [2024-12-11 15:08:25.008948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.046 [2024-12-11 15:08:25.008955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.046 [2024-12-11 15:08:25.008962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.046 [2024-12-11 15:08:25.008977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.046 qpair failed and we were unable to recover it. 00:27:32.046 [2024-12-11 15:08:25.018921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.046 [2024-12-11 15:08:25.018979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.046 [2024-12-11 15:08:25.018994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.046 [2024-12-11 15:08:25.019001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.046 [2024-12-11 15:08:25.019007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.046 [2024-12-11 15:08:25.019022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.046 qpair failed and we were unable to recover it. 00:27:32.046 [2024-12-11 15:08:25.028955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.046 [2024-12-11 15:08:25.029026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.046 [2024-12-11 15:08:25.029041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.046 [2024-12-11 15:08:25.029049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.046 [2024-12-11 15:08:25.029056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.046 [2024-12-11 15:08:25.029071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.046 qpair failed and we were unable to recover it. 
00:27:32.046 [2024-12-11 15:08:25.038957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.046 [2024-12-11 15:08:25.039013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.046 [2024-12-11 15:08:25.039028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.046 [2024-12-11 15:08:25.039035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.046 [2024-12-11 15:08:25.039042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.046 [2024-12-11 15:08:25.039057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.046 qpair failed and we were unable to recover it. 00:27:32.046 [2024-12-11 15:08:25.048970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.046 [2024-12-11 15:08:25.049028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.046 [2024-12-11 15:08:25.049042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.046 [2024-12-11 15:08:25.049050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.046 [2024-12-11 15:08:25.049057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.046 [2024-12-11 15:08:25.049072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.046 qpair failed and we were unable to recover it. 00:27:32.046 [2024-12-11 15:08:25.059010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.046 [2024-12-11 15:08:25.059087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.046 [2024-12-11 15:08:25.059102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.046 [2024-12-11 15:08:25.059109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.046 [2024-12-11 15:08:25.059116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.046 [2024-12-11 15:08:25.059131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.046 qpair failed and we were unable to recover it. 
00:27:32.046 [2024-12-11 15:08:25.068984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.046 [2024-12-11 15:08:25.069049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.046 [2024-12-11 15:08:25.069067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.046 [2024-12-11 15:08:25.069074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.046 [2024-12-11 15:08:25.069081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.046 [2024-12-11 15:08:25.069095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.046 qpair failed and we were unable to recover it. 00:27:32.046 [2024-12-11 15:08:25.079008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.046 [2024-12-11 15:08:25.079081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.046 [2024-12-11 15:08:25.079096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.046 [2024-12-11 15:08:25.079104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.046 [2024-12-11 15:08:25.079111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.046 [2024-12-11 15:08:25.079126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.046 qpair failed and we were unable to recover it. 00:27:32.046 [2024-12-11 15:08:25.089094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.046 [2024-12-11 15:08:25.089150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.046 [2024-12-11 15:08:25.089175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.046 [2024-12-11 15:08:25.089183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.046 [2024-12-11 15:08:25.089190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.046 [2024-12-11 15:08:25.089209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.046 qpair failed and we were unable to recover it. 
00:27:32.306 [2024-12-11 15:08:25.099139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.306 [2024-12-11 15:08:25.099204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.306 [2024-12-11 15:08:25.099224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.306 [2024-12-11 15:08:25.099231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.306 [2024-12-11 15:08:25.099239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.306 [2024-12-11 15:08:25.099256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.306 qpair failed and we were unable to recover it. 00:27:32.306 [2024-12-11 15:08:25.109102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.306 [2024-12-11 15:08:25.109163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.306 [2024-12-11 15:08:25.109180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.306 [2024-12-11 15:08:25.109187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.306 [2024-12-11 15:08:25.109193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.306 [2024-12-11 15:08:25.109213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.306 qpair failed and we were unable to recover it. 00:27:32.306 [2024-12-11 15:08:25.119133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.306 [2024-12-11 15:08:25.119191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.306 [2024-12-11 15:08:25.119208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.306 [2024-12-11 15:08:25.119215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.306 [2024-12-11 15:08:25.119222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.306 [2024-12-11 15:08:25.119238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.306 qpair failed and we were unable to recover it. 
00:27:32.306 [2024-12-11 15:08:25.129167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.306 [2024-12-11 15:08:25.129217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.306 [2024-12-11 15:08:25.129233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.306 [2024-12-11 15:08:25.129240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.306 [2024-12-11 15:08:25.129247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.306 [2024-12-11 15:08:25.129262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.306 qpair failed and we were unable to recover it. 00:27:32.306 [2024-12-11 15:08:25.139199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.306 [2024-12-11 15:08:25.139259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.306 [2024-12-11 15:08:25.139274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.306 [2024-12-11 15:08:25.139282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.306 [2024-12-11 15:08:25.139289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.306 [2024-12-11 15:08:25.139304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.306 qpair failed and we were unable to recover it. 00:27:32.306 [2024-12-11 15:08:25.149263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.306 [2024-12-11 15:08:25.149323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.306 [2024-12-11 15:08:25.149338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.306 [2024-12-11 15:08:25.149346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.306 [2024-12-11 15:08:25.149352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.306 [2024-12-11 15:08:25.149368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.306 qpair failed and we were unable to recover it. 
00:27:32.306 [2024-12-11 15:08:25.159240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.306 [2024-12-11 15:08:25.159303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.306 [2024-12-11 15:08:25.159319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.306 [2024-12-11 15:08:25.159326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.306 [2024-12-11 15:08:25.159333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.306 [2024-12-11 15:08:25.159348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.306 qpair failed and we were unable to recover it. 00:27:32.306 [2024-12-11 15:08:25.169264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.306 [2024-12-11 15:08:25.169334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.306 [2024-12-11 15:08:25.169350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.306 [2024-12-11 15:08:25.169357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.306 [2024-12-11 15:08:25.169364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.306 [2024-12-11 15:08:25.169379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.306 qpair failed and we were unable to recover it. 00:27:32.306 [2024-12-11 15:08:25.179362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.306 [2024-12-11 15:08:25.179434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.306 [2024-12-11 15:08:25.179449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.306 [2024-12-11 15:08:25.179457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.306 [2024-12-11 15:08:25.179463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.306 [2024-12-11 15:08:25.179478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.306 qpair failed and we were unable to recover it. 
00:27:32.307 [2024-12-11 15:08:25.189409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.307 [2024-12-11 15:08:25.189466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.307 [2024-12-11 15:08:25.189480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.307 [2024-12-11 15:08:25.189488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.307 [2024-12-11 15:08:25.189495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.307 [2024-12-11 15:08:25.189509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.307 qpair failed and we were unable to recover it. 00:27:32.307 [2024-12-11 15:08:25.199485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.307 [2024-12-11 15:08:25.199548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.307 [2024-12-11 15:08:25.199565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.307 [2024-12-11 15:08:25.199573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.307 [2024-12-11 15:08:25.199580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.307 [2024-12-11 15:08:25.199594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.307 qpair failed and we were unable to recover it. 00:27:32.307 [2024-12-11 15:08:25.209497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.307 [2024-12-11 15:08:25.209554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.307 [2024-12-11 15:08:25.209569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.307 [2024-12-11 15:08:25.209576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.307 [2024-12-11 15:08:25.209582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.307 [2024-12-11 15:08:25.209598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.307 qpair failed and we were unable to recover it. 
00:27:32.307 [2024-12-11 15:08:25.219480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.307 [2024-12-11 15:08:25.219539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.307 [2024-12-11 15:08:25.219553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.307 [2024-12-11 15:08:25.219560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.307 [2024-12-11 15:08:25.219567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.307 [2024-12-11 15:08:25.219583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.307 qpair failed and we were unable to recover it. 00:27:32.307 [2024-12-11 15:08:25.229514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.307 [2024-12-11 15:08:25.229576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.307 [2024-12-11 15:08:25.229592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.307 [2024-12-11 15:08:25.229600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.307 [2024-12-11 15:08:25.229606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.307 [2024-12-11 15:08:25.229621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.307 qpair failed and we were unable to recover it. 00:27:32.307 [2024-12-11 15:08:25.239539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.307 [2024-12-11 15:08:25.239598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.307 [2024-12-11 15:08:25.239613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.307 [2024-12-11 15:08:25.239620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.307 [2024-12-11 15:08:25.239627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.307 [2024-12-11 15:08:25.239645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.307 qpair failed and we were unable to recover it. 
00:27:32.307 [2024-12-11 15:08:25.249556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.307 [2024-12-11 15:08:25.249612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.307 [2024-12-11 15:08:25.249627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.307 [2024-12-11 15:08:25.249634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.307 [2024-12-11 15:08:25.249641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.307 [2024-12-11 15:08:25.249656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.307 qpair failed and we were unable to recover it. 00:27:32.307 [2024-12-11 15:08:25.259586] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.307 [2024-12-11 15:08:25.259646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.307 [2024-12-11 15:08:25.259661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.307 [2024-12-11 15:08:25.259668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.307 [2024-12-11 15:08:25.259674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.307 [2024-12-11 15:08:25.259691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.307 qpair failed and we were unable to recover it. 00:27:32.307 [2024-12-11 15:08:25.269560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.307 [2024-12-11 15:08:25.269620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.307 [2024-12-11 15:08:25.269635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.307 [2024-12-11 15:08:25.269642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.307 [2024-12-11 15:08:25.269649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.307 [2024-12-11 15:08:25.269664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.307 qpair failed and we were unable to recover it. 
00:27:32.307 [2024-12-11 15:08:25.279572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.307 [2024-12-11 15:08:25.279628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.307 [2024-12-11 15:08:25.279642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.307 [2024-12-11 15:08:25.279649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.307 [2024-12-11 15:08:25.279656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.307 [2024-12-11 15:08:25.279670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.307 qpair failed and we were unable to recover it. 00:27:32.307 [2024-12-11 15:08:25.289609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.307 [2024-12-11 15:08:25.289660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.307 [2024-12-11 15:08:25.289675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.307 [2024-12-11 15:08:25.289681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.307 [2024-12-11 15:08:25.289688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.307 [2024-12-11 15:08:25.289704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.307 qpair failed and we were unable to recover it. 00:27:32.307 [2024-12-11 15:08:25.299704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.307 [2024-12-11 15:08:25.299762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.307 [2024-12-11 15:08:25.299776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.307 [2024-12-11 15:08:25.299783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.307 [2024-12-11 15:08:25.299790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.307 [2024-12-11 15:08:25.299805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.307 qpair failed and we were unable to recover it. 
00:27:32.307 [2024-12-11 15:08:25.309657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.307 [2024-12-11 15:08:25.309713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.307 [2024-12-11 15:08:25.309727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.307 [2024-12-11 15:08:25.309734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.307 [2024-12-11 15:08:25.309740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.307 [2024-12-11 15:08:25.309756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.307 qpair failed and we were unable to recover it. 00:27:32.307 [2024-12-11 15:08:25.319692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.307 [2024-12-11 15:08:25.319752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.308 [2024-12-11 15:08:25.319766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.308 [2024-12-11 15:08:25.319773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.308 [2024-12-11 15:08:25.319780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.308 [2024-12-11 15:08:25.319795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.308 qpair failed and we were unable to recover it. 00:27:32.308 [2024-12-11 15:08:25.329771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.308 [2024-12-11 15:08:25.329825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.308 [2024-12-11 15:08:25.329843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.308 [2024-12-11 15:08:25.329851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.308 [2024-12-11 15:08:25.329857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.308 [2024-12-11 15:08:25.329872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.308 qpair failed and we were unable to recover it. 
00:27:32.308 [2024-12-11 15:08:25.339747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.308 [2024-12-11 15:08:25.339805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.308 [2024-12-11 15:08:25.339819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.308 [2024-12-11 15:08:25.339827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.308 [2024-12-11 15:08:25.339834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.308 [2024-12-11 15:08:25.339848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.308 qpair failed and we were unable to recover it. 00:27:32.308 [2024-12-11 15:08:25.349778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.308 [2024-12-11 15:08:25.349833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.308 [2024-12-11 15:08:25.349852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.308 [2024-12-11 15:08:25.349860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.308 [2024-12-11 15:08:25.349867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.308 [2024-12-11 15:08:25.349884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.308 qpair failed and we were unable to recover it. 00:27:32.567 [2024-12-11 15:08:25.359865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.567 [2024-12-11 15:08:25.359925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.567 [2024-12-11 15:08:25.359944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.567 [2024-12-11 15:08:25.359952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.567 [2024-12-11 15:08:25.359959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.567 [2024-12-11 15:08:25.359977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.567 qpair failed and we were unable to recover it. 
00:27:32.567 [2024-12-11 15:08:25.369892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-12-11 15:08:25.369961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-12-11 15:08:25.369976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.568 [2024-12-11 15:08:25.369984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.568 [2024-12-11 15:08:25.369990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.568 [2024-12-11 15:08:25.370010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.568 qpair failed and we were unable to recover it. 00:27:32.568 [2024-12-11 15:08:25.379936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-12-11 15:08:25.379994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-12-11 15:08:25.380009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.568 [2024-12-11 15:08:25.380016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.568 [2024-12-11 15:08:25.380022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.568 [2024-12-11 15:08:25.380037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.568 qpair failed and we were unable to recover it. 00:27:32.568 [2024-12-11 15:08:25.389885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-12-11 15:08:25.389947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-12-11 15:08:25.389962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.568 [2024-12-11 15:08:25.389969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.568 [2024-12-11 15:08:25.389975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.568 [2024-12-11 15:08:25.389990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.568 qpair failed and we were unable to recover it. 
00:27:32.568 [2024-12-11 15:08:25.399980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-12-11 15:08:25.400064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-12-11 15:08:25.400079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.568 [2024-12-11 15:08:25.400086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.568 [2024-12-11 15:08:25.400092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.568 [2024-12-11 15:08:25.400107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.568 qpair failed and we were unable to recover it. 00:27:32.568 [2024-12-11 15:08:25.410015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-12-11 15:08:25.410072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-12-11 15:08:25.410086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.568 [2024-12-11 15:08:25.410094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.568 [2024-12-11 15:08:25.410100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.568 [2024-12-11 15:08:25.410115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.568 qpair failed and we were unable to recover it. 00:27:32.568 [2024-12-11 15:08:25.420060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-12-11 15:08:25.420119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-12-11 15:08:25.420134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.568 [2024-12-11 15:08:25.420141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.568 [2024-12-11 15:08:25.420147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.568 [2024-12-11 15:08:25.420166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.568 qpair failed and we were unable to recover it. 
00:27:32.568 [2024-12-11 15:08:25.430032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-12-11 15:08:25.430090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-12-11 15:08:25.430105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.568 [2024-12-11 15:08:25.430112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.568 [2024-12-11 15:08:25.430119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.568 [2024-12-11 15:08:25.430133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.568 qpair failed and we were unable to recover it. 00:27:32.568 [2024-12-11 15:08:25.440037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-12-11 15:08:25.440100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-12-11 15:08:25.440115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.568 [2024-12-11 15:08:25.440122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.568 [2024-12-11 15:08:25.440128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.568 [2024-12-11 15:08:25.440143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.568 qpair failed and we were unable to recover it. 00:27:32.568 [2024-12-11 15:08:25.450119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-12-11 15:08:25.450182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-12-11 15:08:25.450198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.568 [2024-12-11 15:08:25.450205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.568 [2024-12-11 15:08:25.450212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.568 [2024-12-11 15:08:25.450228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.568 qpair failed and we were unable to recover it. 
00:27:32.568 [2024-12-11 15:08:25.460211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-12-11 15:08:25.460310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-12-11 15:08:25.460329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.568 [2024-12-11 15:08:25.460336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.568 [2024-12-11 15:08:25.460342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.568 [2024-12-11 15:08:25.460357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.568 qpair failed and we were unable to recover it. 00:27:32.568 [2024-12-11 15:08:25.470202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-12-11 15:08:25.470264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-12-11 15:08:25.470278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.568 [2024-12-11 15:08:25.470286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.568 [2024-12-11 15:08:25.470292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.568 [2024-12-11 15:08:25.470307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.568 qpair failed and we were unable to recover it. 00:27:32.568 [2024-12-11 15:08:25.480221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-12-11 15:08:25.480312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-12-11 15:08:25.480327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.568 [2024-12-11 15:08:25.480334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.568 [2024-12-11 15:08:25.480340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.568 [2024-12-11 15:08:25.480355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.568 qpair failed and we were unable to recover it. 
00:27:32.568 [2024-12-11 15:08:25.490238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-12-11 15:08:25.490296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-12-11 15:08:25.490311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.568 [2024-12-11 15:08:25.490318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.568 [2024-12-11 15:08:25.490324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.568 [2024-12-11 15:08:25.490339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.568 qpair failed and we were unable to recover it. 00:27:32.568 [2024-12-11 15:08:25.500281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.568 [2024-12-11 15:08:25.500339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.568 [2024-12-11 15:08:25.500354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-12-11 15:08:25.500361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-12-11 15:08:25.500370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.569 [2024-12-11 15:08:25.500386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.569 qpair failed and we were unable to recover it. 00:27:32.569 [2024-12-11 15:08:25.510323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-12-11 15:08:25.510396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-12-11 15:08:25.510411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-12-11 15:08:25.510418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-12-11 15:08:25.510424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.569 [2024-12-11 15:08:25.510441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.569 qpair failed and we were unable to recover it. 
00:27:32.569 [2024-12-11 15:08:25.520384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-12-11 15:08:25.520441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-12-11 15:08:25.520456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-12-11 15:08:25.520463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-12-11 15:08:25.520470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.569 [2024-12-11 15:08:25.520486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.569 qpair failed and we were unable to recover it. 00:27:32.569 [2024-12-11 15:08:25.530362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-12-11 15:08:25.530417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-12-11 15:08:25.530431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-12-11 15:08:25.530439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-12-11 15:08:25.530446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.569 [2024-12-11 15:08:25.530462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.569 qpair failed and we were unable to recover it. 00:27:32.569 [2024-12-11 15:08:25.540381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-12-11 15:08:25.540473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-12-11 15:08:25.540487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-12-11 15:08:25.540494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-12-11 15:08:25.540500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.569 [2024-12-11 15:08:25.540515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.569 qpair failed and we were unable to recover it. 
00:27:32.569 [2024-12-11 15:08:25.550441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-12-11 15:08:25.550519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-12-11 15:08:25.550534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-12-11 15:08:25.550541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-12-11 15:08:25.550548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.569 [2024-12-11 15:08:25.550563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.569 qpair failed and we were unable to recover it. 00:27:32.569 [2024-12-11 15:08:25.560455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-12-11 15:08:25.560517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-12-11 15:08:25.560532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-12-11 15:08:25.560539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-12-11 15:08:25.560547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.569 [2024-12-11 15:08:25.560562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.569 qpair failed and we were unable to recover it. 00:27:32.569 [2024-12-11 15:08:25.570539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-12-11 15:08:25.570644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-12-11 15:08:25.570659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-12-11 15:08:25.570666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-12-11 15:08:25.570673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.569 [2024-12-11 15:08:25.570688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.569 qpair failed and we were unable to recover it. 
00:27:32.569 [2024-12-11 15:08:25.580523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-12-11 15:08:25.580581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-12-11 15:08:25.580596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-12-11 15:08:25.580602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-12-11 15:08:25.580609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.569 [2024-12-11 15:08:25.580624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.569 qpair failed and we were unable to recover it. 00:27:32.569 [2024-12-11 15:08:25.590548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-12-11 15:08:25.590606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-12-11 15:08:25.590624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-12-11 15:08:25.590632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-12-11 15:08:25.590638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.569 [2024-12-11 15:08:25.590653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.569 qpair failed and we were unable to recover it. 00:27:32.569 [2024-12-11 15:08:25.600638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-12-11 15:08:25.600701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-12-11 15:08:25.600718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-12-11 15:08:25.600727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-12-11 15:08:25.600733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.569 [2024-12-11 15:08:25.600749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.569 qpair failed and we were unable to recover it. 
00:27:32.569 [2024-12-11 15:08:25.610598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.569 [2024-12-11 15:08:25.610655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.569 [2024-12-11 15:08:25.610675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.569 [2024-12-11 15:08:25.610683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.569 [2024-12-11 15:08:25.610690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.569 [2024-12-11 15:08:25.610707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.569 qpair failed and we were unable to recover it. 00:27:32.829 [2024-12-11 15:08:25.620673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.829 [2024-12-11 15:08:25.620774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.829 [2024-12-11 15:08:25.620794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.829 [2024-12-11 15:08:25.620802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.829 [2024-12-11 15:08:25.620809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.829 [2024-12-11 15:08:25.620826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-12-11 15:08:25.630684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.829 [2024-12-11 15:08:25.630773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.829 [2024-12-11 15:08:25.630789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.829 [2024-12-11 15:08:25.630797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.829 [2024-12-11 15:08:25.630806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.829 [2024-12-11 15:08:25.630822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.829 qpair failed and we were unable to recover it. 
00:27:32.829 [2024-12-11 15:08:25.640756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.829 [2024-12-11 15:08:25.640823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.829 [2024-12-11 15:08:25.640838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.829 [2024-12-11 15:08:25.640846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.829 [2024-12-11 15:08:25.640852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.829 [2024-12-11 15:08:25.640867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-12-11 15:08:25.650774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.829 [2024-12-11 15:08:25.650834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.829 [2024-12-11 15:08:25.650849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.829 [2024-12-11 15:08:25.650856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.829 [2024-12-11 15:08:25.650862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.829 [2024-12-11 15:08:25.650878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-12-11 15:08:25.660784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.829 [2024-12-11 15:08:25.660844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.829 [2024-12-11 15:08:25.660858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.829 [2024-12-11 15:08:25.660865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.829 [2024-12-11 15:08:25.660872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.829 [2024-12-11 15:08:25.660888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.829 qpair failed and we were unable to recover it. 
00:27:32.829 [2024-12-11 15:08:25.670720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.829 [2024-12-11 15:08:25.670780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.829 [2024-12-11 15:08:25.670794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.829 [2024-12-11 15:08:25.670802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.829 [2024-12-11 15:08:25.670808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.829 [2024-12-11 15:08:25.670823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-12-11 15:08:25.680800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.829 [2024-12-11 15:08:25.680882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.829 [2024-12-11 15:08:25.680897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.829 [2024-12-11 15:08:25.680904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.829 [2024-12-11 15:08:25.680910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.829 [2024-12-11 15:08:25.680925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-12-11 15:08:25.690879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.829 [2024-12-11 15:08:25.690976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.829 [2024-12-11 15:08:25.690991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.829 [2024-12-11 15:08:25.690998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.829 [2024-12-11 15:08:25.691005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.829 [2024-12-11 15:08:25.691020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.829 qpair failed and we were unable to recover it. 
00:27:32.829 [2024-12-11 15:08:25.700985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.829 [2024-12-11 15:08:25.701057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.829 [2024-12-11 15:08:25.701071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.829 [2024-12-11 15:08:25.701078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.829 [2024-12-11 15:08:25.701085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.829 [2024-12-11 15:08:25.701100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.829 qpair failed and we were unable to recover it. 00:27:32.829 [2024-12-11 15:08:25.710916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.830 [2024-12-11 15:08:25.710980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.830 [2024-12-11 15:08:25.710995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.830 [2024-12-11 15:08:25.711003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.830 [2024-12-11 15:08:25.711009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.830 [2024-12-11 15:08:25.711024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-12-11 15:08:25.720941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.830 [2024-12-11 15:08:25.720997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.830 [2024-12-11 15:08:25.721015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.830 [2024-12-11 15:08:25.721022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.830 [2024-12-11 15:08:25.721029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.830 [2024-12-11 15:08:25.721044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.830 qpair failed and we were unable to recover it. 
00:27:32.830 [2024-12-11 15:08:25.730957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.830 [2024-12-11 15:08:25.731031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.830 [2024-12-11 15:08:25.731047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.830 [2024-12-11 15:08:25.731055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.830 [2024-12-11 15:08:25.731062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.830 [2024-12-11 15:08:25.731077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-12-11 15:08:25.741014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.830 [2024-12-11 15:08:25.741073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.830 [2024-12-11 15:08:25.741088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.830 [2024-12-11 15:08:25.741095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.830 [2024-12-11 15:08:25.741102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.830 [2024-12-11 15:08:25.741117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-12-11 15:08:25.751011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.830 [2024-12-11 15:08:25.751070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.830 [2024-12-11 15:08:25.751086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.830 [2024-12-11 15:08:25.751093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.830 [2024-12-11 15:08:25.751100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.830 [2024-12-11 15:08:25.751115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.830 qpair failed and we were unable to recover it. 
00:27:32.830 [2024-12-11 15:08:25.761056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.830 [2024-12-11 15:08:25.761114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.830 [2024-12-11 15:08:25.761129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.830 [2024-12-11 15:08:25.761137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.830 [2024-12-11 15:08:25.761146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.830 [2024-12-11 15:08:25.761165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-12-11 15:08:25.771119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.830 [2024-12-11 15:08:25.771176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.830 [2024-12-11 15:08:25.771192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.830 [2024-12-11 15:08:25.771200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.830 [2024-12-11 15:08:25.771206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.830 [2024-12-11 15:08:25.771222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-12-11 15:08:25.781095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.830 [2024-12-11 15:08:25.781155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.830 [2024-12-11 15:08:25.781175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.830 [2024-12-11 15:08:25.781183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.830 [2024-12-11 15:08:25.781189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.830 [2024-12-11 15:08:25.781205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.830 qpair failed and we were unable to recover it. 
00:27:32.830 [2024-12-11 15:08:25.791193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.830 [2024-12-11 15:08:25.791249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.830 [2024-12-11 15:08:25.791263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.830 [2024-12-11 15:08:25.791270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.830 [2024-12-11 15:08:25.791277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.830 [2024-12-11 15:08:25.791292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-12-11 15:08:25.801192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.830 [2024-12-11 15:08:25.801257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.830 [2024-12-11 15:08:25.801272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.830 [2024-12-11 15:08:25.801279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.830 [2024-12-11 15:08:25.801285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.830 [2024-12-11 15:08:25.801299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-12-11 15:08:25.811204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.830 [2024-12-11 15:08:25.811256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.830 [2024-12-11 15:08:25.811271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.830 [2024-12-11 15:08:25.811278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.830 [2024-12-11 15:08:25.811284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.830 [2024-12-11 15:08:25.811300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.830 qpair failed and we were unable to recover it. 
00:27:32.830 [2024-12-11 15:08:25.821228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.830 [2024-12-11 15:08:25.821286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.830 [2024-12-11 15:08:25.821300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.830 [2024-12-11 15:08:25.821307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.830 [2024-12-11 15:08:25.821313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.830 [2024-12-11 15:08:25.821328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-12-11 15:08:25.831249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.830 [2024-12-11 15:08:25.831309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.830 [2024-12-11 15:08:25.831324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.830 [2024-12-11 15:08:25.831332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.830 [2024-12-11 15:08:25.831338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.830 [2024-12-11 15:08:25.831353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.830 qpair failed and we were unable to recover it. 00:27:32.830 [2024-12-11 15:08:25.841270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.830 [2024-12-11 15:08:25.841323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.830 [2024-12-11 15:08:25.841337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.831 [2024-12-11 15:08:25.841344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.831 [2024-12-11 15:08:25.841351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.831 [2024-12-11 15:08:25.841366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.831 qpair failed and we were unable to recover it. 
00:27:32.831 [2024-12-11 15:08:25.851295] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.831 [2024-12-11 15:08:25.851346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.831 [2024-12-11 15:08:25.851364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.831 [2024-12-11 15:08:25.851372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.831 [2024-12-11 15:08:25.851379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.831 [2024-12-11 15:08:25.851394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-12-11 15:08:25.861330] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.831 [2024-12-11 15:08:25.861389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.831 [2024-12-11 15:08:25.861403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.831 [2024-12-11 15:08:25.861411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.831 [2024-12-11 15:08:25.861417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.831 [2024-12-11 15:08:25.861433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.831 qpair failed and we were unable to recover it. 00:27:32.831 [2024-12-11 15:08:25.871368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.831 [2024-12-11 15:08:25.871425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.831 [2024-12-11 15:08:25.871443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.831 [2024-12-11 15:08:25.871451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.831 [2024-12-11 15:08:25.871459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:32.831 [2024-12-11 15:08:25.871480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.831 qpair failed and we were unable to recover it. 
00:27:33.090 [2024-12-11 15:08:25.881387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.090 [2024-12-11 15:08:25.881445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.090 [2024-12-11 15:08:25.881464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.090 [2024-12-11 15:08:25.881472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.090 [2024-12-11 15:08:25.881479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.090 [2024-12-11 15:08:25.881496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.090 qpair failed and we were unable to recover it. 00:27:33.090 [2024-12-11 15:08:25.891441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.090 [2024-12-11 15:08:25.891511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.090 [2024-12-11 15:08:25.891526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.090 [2024-12-11 15:08:25.891533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.090 [2024-12-11 15:08:25.891543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.090 [2024-12-11 15:08:25.891559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.090 qpair failed and we were unable to recover it. 00:27:33.090 [2024-12-11 15:08:25.901495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.090 [2024-12-11 15:08:25.901602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.090 [2024-12-11 15:08:25.901619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.090 [2024-12-11 15:08:25.901627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.090 [2024-12-11 15:08:25.901634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.090 [2024-12-11 15:08:25.901651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.090 qpair failed and we were unable to recover it. 
00:27:33.090 [2024-12-11 15:08:25.911513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.090 [2024-12-11 15:08:25.911575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.090 [2024-12-11 15:08:25.911590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.090 [2024-12-11 15:08:25.911597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.090 [2024-12-11 15:08:25.911603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.090 [2024-12-11 15:08:25.911618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.090 qpair failed and we were unable to recover it. 00:27:33.090 [2024-12-11 15:08:25.921501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.090 [2024-12-11 15:08:25.921559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.090 [2024-12-11 15:08:25.921573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.090 [2024-12-11 15:08:25.921581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.090 [2024-12-11 15:08:25.921588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.090 [2024-12-11 15:08:25.921603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.090 qpair failed and we were unable to recover it. 00:27:33.090 [2024-12-11 15:08:25.931530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.090 [2024-12-11 15:08:25.931604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.090 [2024-12-11 15:08:25.931619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.090 [2024-12-11 15:08:25.931627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.090 [2024-12-11 15:08:25.931633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.090 [2024-12-11 15:08:25.931648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.090 qpair failed and we were unable to recover it. 
00:27:33.090 [2024-12-11 15:08:25.941566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.090 [2024-12-11 15:08:25.941628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.090 [2024-12-11 15:08:25.941643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.090 [2024-12-11 15:08:25.941650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.090 [2024-12-11 15:08:25.941656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.090 [2024-12-11 15:08:25.941672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.090 qpair failed and we were unable to recover it. 00:27:33.090 [2024-12-11 15:08:25.951621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.090 [2024-12-11 15:08:25.951675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.090 [2024-12-11 15:08:25.951689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.090 [2024-12-11 15:08:25.951696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.090 [2024-12-11 15:08:25.951703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.090 [2024-12-11 15:08:25.951718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.090 qpair failed and we were unable to recover it. 00:27:33.090 [2024-12-11 15:08:25.961637] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.090 [2024-12-11 15:08:25.961697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.090 [2024-12-11 15:08:25.961711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.090 [2024-12-11 15:08:25.961718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.090 [2024-12-11 15:08:25.961725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.090 [2024-12-11 15:08:25.961739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.090 qpair failed and we were unable to recover it. 
00:27:33.090 [2024-12-11 15:08:25.971651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.090 [2024-12-11 15:08:25.971701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.090 [2024-12-11 15:08:25.971715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.090 [2024-12-11 15:08:25.971722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.090 [2024-12-11 15:08:25.971729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.090 [2024-12-11 15:08:25.971744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.090 qpair failed and we were unable to recover it. 00:27:33.090 [2024-12-11 15:08:25.981681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.090 [2024-12-11 15:08:25.981738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.090 [2024-12-11 15:08:25.981755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.090 [2024-12-11 15:08:25.981763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.090 [2024-12-11 15:08:25.981769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.091 [2024-12-11 15:08:25.981784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.091 qpair failed and we were unable to recover it. 00:27:33.091 [2024-12-11 15:08:25.991700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.091 [2024-12-11 15:08:25.991757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.091 [2024-12-11 15:08:25.991770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.091 [2024-12-11 15:08:25.991779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.091 [2024-12-11 15:08:25.991785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.091 [2024-12-11 15:08:25.991801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.091 qpair failed and we were unable to recover it. 
00:27:33.091 [2024-12-11 15:08:26.001720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.091 [2024-12-11 15:08:26.001804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.091 [2024-12-11 15:08:26.001820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.091 [2024-12-11 15:08:26.001827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.091 [2024-12-11 15:08:26.001833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.091 [2024-12-11 15:08:26.001848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.091 qpair failed and we were unable to recover it. 00:27:33.091 [2024-12-11 15:08:26.011752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.091 [2024-12-11 15:08:26.011810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.091 [2024-12-11 15:08:26.011824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.091 [2024-12-11 15:08:26.011831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.091 [2024-12-11 15:08:26.011838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.091 [2024-12-11 15:08:26.011853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.091 qpair failed and we were unable to recover it. 00:27:33.091 [2024-12-11 15:08:26.021795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.091 [2024-12-11 15:08:26.021870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.091 [2024-12-11 15:08:26.021885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.091 [2024-12-11 15:08:26.021892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.091 [2024-12-11 15:08:26.021903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.091 [2024-12-11 15:08:26.021918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.091 qpair failed and we were unable to recover it. 
00:27:33.091 [2024-12-11 15:08:26.031866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.091 [2024-12-11 15:08:26.031925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.091 [2024-12-11 15:08:26.031940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.091 [2024-12-11 15:08:26.031948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.091 [2024-12-11 15:08:26.031954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.091 [2024-12-11 15:08:26.031969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.091 qpair failed and we were unable to recover it. 00:27:33.091 [2024-12-11 15:08:26.041846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.091 [2024-12-11 15:08:26.041925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.091 [2024-12-11 15:08:26.041940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.091 [2024-12-11 15:08:26.041948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.091 [2024-12-11 15:08:26.041954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.091 [2024-12-11 15:08:26.041969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.091 qpair failed and we were unable to recover it. 00:27:33.091 [2024-12-11 15:08:26.051877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.091 [2024-12-11 15:08:26.051934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.091 [2024-12-11 15:08:26.051949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.091 [2024-12-11 15:08:26.051957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.091 [2024-12-11 15:08:26.051964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.091 [2024-12-11 15:08:26.051979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.091 qpair failed and we were unable to recover it. 
00:27:33.091 [2024-12-11 15:08:26.061921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.091 [2024-12-11 15:08:26.061980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.091 [2024-12-11 15:08:26.061995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.091 [2024-12-11 15:08:26.062002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.091 [2024-12-11 15:08:26.062009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.091 [2024-12-11 15:08:26.062023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.091 qpair failed and we were unable to recover it. 00:27:33.091 [2024-12-11 15:08:26.071948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.091 [2024-12-11 15:08:26.072004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.091 [2024-12-11 15:08:26.072019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.091 [2024-12-11 15:08:26.072027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.091 [2024-12-11 15:08:26.072034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.091 [2024-12-11 15:08:26.072049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.091 qpair failed and we were unable to recover it. 00:27:33.091 [2024-12-11 15:08:26.081958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.091 [2024-12-11 15:08:26.082036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.091 [2024-12-11 15:08:26.082050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.091 [2024-12-11 15:08:26.082057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.091 [2024-12-11 15:08:26.082063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.091 [2024-12-11 15:08:26.082079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.091 qpair failed and we were unable to recover it. 
00:27:33.091 [2024-12-11 15:08:26.092001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.091 [2024-12-11 15:08:26.092068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.091 [2024-12-11 15:08:26.092083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.091 [2024-12-11 15:08:26.092090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.091 [2024-12-11 15:08:26.092097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.091 [2024-12-11 15:08:26.092112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.091 qpair failed and we were unable to recover it. 00:27:33.091 [2024-12-11 15:08:26.102021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.091 [2024-12-11 15:08:26.102081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.091 [2024-12-11 15:08:26.102096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.091 [2024-12-11 15:08:26.102104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.091 [2024-12-11 15:08:26.102110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.091 [2024-12-11 15:08:26.102125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.091 qpair failed and we were unable to recover it. 00:27:33.091 [2024-12-11 15:08:26.112057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.091 [2024-12-11 15:08:26.112113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.091 [2024-12-11 15:08:26.112131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.091 [2024-12-11 15:08:26.112139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.091 [2024-12-11 15:08:26.112145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.091 [2024-12-11 15:08:26.112164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.091 qpair failed and we were unable to recover it. 
00:27:33.091 [2024-12-11 15:08:26.122131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.092 [2024-12-11 15:08:26.122191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.092 [2024-12-11 15:08:26.122207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.092 [2024-12-11 15:08:26.122214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.092 [2024-12-11 15:08:26.122220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.092 [2024-12-11 15:08:26.122236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.092 qpair failed and we were unable to recover it. 00:27:33.092 [2024-12-11 15:08:26.132111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.092 [2024-12-11 15:08:26.132188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.092 [2024-12-11 15:08:26.132207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.092 [2024-12-11 15:08:26.132215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.092 [2024-12-11 15:08:26.132221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.092 [2024-12-11 15:08:26.132238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.092 qpair failed and we were unable to recover it. 00:27:33.351 [2024-12-11 15:08:26.142149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.351 [2024-12-11 15:08:26.142219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.351 [2024-12-11 15:08:26.142238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.351 [2024-12-11 15:08:26.142247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.351 [2024-12-11 15:08:26.142254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.351 [2024-12-11 15:08:26.142271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.351 qpair failed and we were unable to recover it. 
00:27:33.351 [2024-12-11 15:08:26.152114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.351 [2024-12-11 15:08:26.152178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.351 [2024-12-11 15:08:26.152194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.351 [2024-12-11 15:08:26.152201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.351 [2024-12-11 15:08:26.152211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.351 [2024-12-11 15:08:26.152227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.351 qpair failed and we were unable to recover it. 00:27:33.351 [2024-12-11 15:08:26.162213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.351 [2024-12-11 15:08:26.162269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.351 [2024-12-11 15:08:26.162284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.351 [2024-12-11 15:08:26.162291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.351 [2024-12-11 15:08:26.162298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.351 [2024-12-11 15:08:26.162313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.351 qpair failed and we were unable to recover it. 00:27:33.351 [2024-12-11 15:08:26.172228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.351 [2024-12-11 15:08:26.172281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.351 [2024-12-11 15:08:26.172296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.351 [2024-12-11 15:08:26.172303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.351 [2024-12-11 15:08:26.172310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.351 [2024-12-11 15:08:26.172325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.351 qpair failed and we were unable to recover it. 
00:27:33.351 [2024-12-11 15:08:26.182298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.352 [2024-12-11 15:08:26.182371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.352 [2024-12-11 15:08:26.182386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.352 [2024-12-11 15:08:26.182393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.352 [2024-12-11 15:08:26.182399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.352 [2024-12-11 15:08:26.182415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.352 qpair failed and we were unable to recover it. 00:27:33.352 [2024-12-11 15:08:26.192298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.352 [2024-12-11 15:08:26.192357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.352 [2024-12-11 15:08:26.192372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.352 [2024-12-11 15:08:26.192380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.352 [2024-12-11 15:08:26.192386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.352 [2024-12-11 15:08:26.192401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.352 qpair failed and we were unable to recover it. 00:27:33.352 [2024-12-11 15:08:26.202319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.352 [2024-12-11 15:08:26.202381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.352 [2024-12-11 15:08:26.202396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.352 [2024-12-11 15:08:26.202403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.352 [2024-12-11 15:08:26.202409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.352 [2024-12-11 15:08:26.202425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.352 qpair failed and we were unable to recover it. 
00:27:33.352 [2024-12-11 15:08:26.212338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.352 [2024-12-11 15:08:26.212390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.352 [2024-12-11 15:08:26.212404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.352 [2024-12-11 15:08:26.212411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.352 [2024-12-11 15:08:26.212417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.352 [2024-12-11 15:08:26.212432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.352 qpair failed and we were unable to recover it. 00:27:33.352 [2024-12-11 15:08:26.222390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.352 [2024-12-11 15:08:26.222449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.352 [2024-12-11 15:08:26.222463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.352 [2024-12-11 15:08:26.222470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.352 [2024-12-11 15:08:26.222476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.352 [2024-12-11 15:08:26.222491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.352 qpair failed and we were unable to recover it. 00:27:33.352 [2024-12-11 15:08:26.232482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.352 [2024-12-11 15:08:26.232560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.352 [2024-12-11 15:08:26.232575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.352 [2024-12-11 15:08:26.232582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.352 [2024-12-11 15:08:26.232588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.352 [2024-12-11 15:08:26.232603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.352 qpair failed and we were unable to recover it. 
00:27:33.352 [2024-12-11 15:08:26.242474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.352 [2024-12-11 15:08:26.242537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.352 [2024-12-11 15:08:26.242555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.352 [2024-12-11 15:08:26.242563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.352 [2024-12-11 15:08:26.242569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.352 [2024-12-11 15:08:26.242584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.352 qpair failed and we were unable to recover it. 00:27:33.352 [2024-12-11 15:08:26.252464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.352 [2024-12-11 15:08:26.252522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.352 [2024-12-11 15:08:26.252536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.352 [2024-12-11 15:08:26.252544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.352 [2024-12-11 15:08:26.252550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.352 [2024-12-11 15:08:26.252565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.352 qpair failed and we were unable to recover it. 00:27:33.352 [2024-12-11 15:08:26.262514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.352 [2024-12-11 15:08:26.262577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.352 [2024-12-11 15:08:26.262590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.352 [2024-12-11 15:08:26.262598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.352 [2024-12-11 15:08:26.262604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.352 [2024-12-11 15:08:26.262619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.352 qpair failed and we were unable to recover it. 
00:27:33.352 [2024-12-11 15:08:26.272474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.352 [2024-12-11 15:08:26.272534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.352 [2024-12-11 15:08:26.272548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.352 [2024-12-11 15:08:26.272555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.352 [2024-12-11 15:08:26.272562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.352 [2024-12-11 15:08:26.272576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.352 qpair failed and we were unable to recover it. 00:27:33.352 [2024-12-11 15:08:26.282555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.352 [2024-12-11 15:08:26.282614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.352 [2024-12-11 15:08:26.282628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.352 [2024-12-11 15:08:26.282636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.352 [2024-12-11 15:08:26.282645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.352 [2024-12-11 15:08:26.282660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.352 qpair failed and we were unable to recover it. 00:27:33.352 [2024-12-11 15:08:26.292566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.352 [2024-12-11 15:08:26.292651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.352 [2024-12-11 15:08:26.292666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.352 [2024-12-11 15:08:26.292673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.352 [2024-12-11 15:08:26.292679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.352 [2024-12-11 15:08:26.292695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.352 qpair failed and we were unable to recover it. 
00:27:33.352 [2024-12-11 15:08:26.302677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.352 [2024-12-11 15:08:26.302751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.352 [2024-12-11 15:08:26.302765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.352 [2024-12-11 15:08:26.302773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.352 [2024-12-11 15:08:26.302779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.352 [2024-12-11 15:08:26.302795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.352 qpair failed and we were unable to recover it. 00:27:33.352 [2024-12-11 15:08:26.312703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.352 [2024-12-11 15:08:26.312770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.352 [2024-12-11 15:08:26.312784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.352 [2024-12-11 15:08:26.312792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.353 [2024-12-11 15:08:26.312797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.353 [2024-12-11 15:08:26.312812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.353 qpair failed and we were unable to recover it. 00:27:33.353 [2024-12-11 15:08:26.322676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.353 [2024-12-11 15:08:26.322769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.353 [2024-12-11 15:08:26.322783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.353 [2024-12-11 15:08:26.322790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.353 [2024-12-11 15:08:26.322797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.353 [2024-12-11 15:08:26.322812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.353 qpair failed and we were unable to recover it. 
00:27:33.353 [2024-12-11 15:08:26.332709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.353 [2024-12-11 15:08:26.332764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.353 [2024-12-11 15:08:26.332779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.353 [2024-12-11 15:08:26.332786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.353 [2024-12-11 15:08:26.332793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.353 [2024-12-11 15:08:26.332809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.353 qpair failed and we were unable to recover it. 00:27:33.353 [2024-12-11 15:08:26.342748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.353 [2024-12-11 15:08:26.342807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.353 [2024-12-11 15:08:26.342822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.353 [2024-12-11 15:08:26.342830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.353 [2024-12-11 15:08:26.342837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.353 [2024-12-11 15:08:26.342852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.353 qpair failed and we were unable to recover it. 00:27:33.353 [2024-12-11 15:08:26.352827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.353 [2024-12-11 15:08:26.352886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.353 [2024-12-11 15:08:26.352901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.353 [2024-12-11 15:08:26.352910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.353 [2024-12-11 15:08:26.352917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.353 [2024-12-11 15:08:26.352932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.353 qpair failed and we were unable to recover it. 
00:27:33.353 [2024-12-11 15:08:26.362822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.353 [2024-12-11 15:08:26.362891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.353 [2024-12-11 15:08:26.362906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.353 [2024-12-11 15:08:26.362913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.353 [2024-12-11 15:08:26.362920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.353 [2024-12-11 15:08:26.362935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.353 qpair failed and we were unable to recover it. 00:27:33.353 [2024-12-11 15:08:26.372847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.353 [2024-12-11 15:08:26.372910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.353 [2024-12-11 15:08:26.372929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.353 [2024-12-11 15:08:26.372936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.353 [2024-12-11 15:08:26.372942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.353 [2024-12-11 15:08:26.372957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.353 qpair failed and we were unable to recover it. 00:27:33.353 [2024-12-11 15:08:26.382895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.353 [2024-12-11 15:08:26.382951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.353 [2024-12-11 15:08:26.382966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.353 [2024-12-11 15:08:26.382973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.353 [2024-12-11 15:08:26.382980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.353 [2024-12-11 15:08:26.382995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.353 qpair failed and we were unable to recover it. 
00:27:33.353 [2024-12-11 15:08:26.392895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.353 [2024-12-11 15:08:26.392955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.353 [2024-12-11 15:08:26.392974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.353 [2024-12-11 15:08:26.392983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.353 [2024-12-11 15:08:26.392989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.353 [2024-12-11 15:08:26.393005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.353 qpair failed and we were unable to recover it. 00:27:33.613 [2024-12-11 15:08:26.402954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.613 [2024-12-11 15:08:26.403013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.613 [2024-12-11 15:08:26.403032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.613 [2024-12-11 15:08:26.403041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.613 [2024-12-11 15:08:26.403047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.613 [2024-12-11 15:08:26.403064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.613 qpair failed and we were unable to recover it. 00:27:33.613 [2024-12-11 15:08:26.412931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.613 [2024-12-11 15:08:26.412990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.613 [2024-12-11 15:08:26.413005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.613 [2024-12-11 15:08:26.413013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.613 [2024-12-11 15:08:26.413023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.613 [2024-12-11 15:08:26.413038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.613 qpair failed and we were unable to recover it. 
00:27:33.613 [2024-12-11 15:08:26.422970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.613 [2024-12-11 15:08:26.423066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.613 [2024-12-11 15:08:26.423081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.613 [2024-12-11 15:08:26.423088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.613 [2024-12-11 15:08:26.423095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.613 [2024-12-11 15:08:26.423111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.613 qpair failed and we were unable to recover it. 00:27:33.613 [2024-12-11 15:08:26.433043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.613 [2024-12-11 15:08:26.433098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.613 [2024-12-11 15:08:26.433115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.613 [2024-12-11 15:08:26.433122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.613 [2024-12-11 15:08:26.433129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.613 [2024-12-11 15:08:26.433144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.613 qpair failed and we were unable to recover it. 00:27:33.613 [2024-12-11 15:08:26.443060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.613 [2024-12-11 15:08:26.443116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.613 [2024-12-11 15:08:26.443131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.613 [2024-12-11 15:08:26.443138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.613 [2024-12-11 15:08:26.443144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.613 [2024-12-11 15:08:26.443164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.613 qpair failed and we were unable to recover it. 
00:27:33.613 [2024-12-11 15:08:26.453078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.613 [2024-12-11 15:08:26.453162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.613 [2024-12-11 15:08:26.453178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.613 [2024-12-11 15:08:26.453185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.613 [2024-12-11 15:08:26.453191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.613 [2024-12-11 15:08:26.453207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.613 qpair failed and we were unable to recover it. 00:27:33.613 [2024-12-11 15:08:26.463075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.613 [2024-12-11 15:08:26.463135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.613 [2024-12-11 15:08:26.463150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.613 [2024-12-11 15:08:26.463163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.613 [2024-12-11 15:08:26.463169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.613 [2024-12-11 15:08:26.463184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.613 qpair failed and we were unable to recover it. 00:27:33.613 [2024-12-11 15:08:26.473130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.613 [2024-12-11 15:08:26.473189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.613 [2024-12-11 15:08:26.473204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.613 [2024-12-11 15:08:26.473212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.613 [2024-12-11 15:08:26.473218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.613 [2024-12-11 15:08:26.473233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.613 qpair failed and we were unable to recover it. 
00:27:33.613 [2024-12-11 15:08:26.483136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.613 [2024-12-11 15:08:26.483195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.613 [2024-12-11 15:08:26.483209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.613 [2024-12-11 15:08:26.483217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.613 [2024-12-11 15:08:26.483224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.613 [2024-12-11 15:08:26.483240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.613 qpair failed and we were unable to recover it. 00:27:33.613 [2024-12-11 15:08:26.493194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.613 [2024-12-11 15:08:26.493300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.613 [2024-12-11 15:08:26.493315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.613 [2024-12-11 15:08:26.493323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.613 [2024-12-11 15:08:26.493329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.613 [2024-12-11 15:08:26.493344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.613 qpair failed and we were unable to recover it. 00:27:33.613 [2024-12-11 15:08:26.503171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.613 [2024-12-11 15:08:26.503231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.613 [2024-12-11 15:08:26.503249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.613 [2024-12-11 15:08:26.503256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.613 [2024-12-11 15:08:26.503262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.613 [2024-12-11 15:08:26.503278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.613 qpair failed and we were unable to recover it. 
00:27:33.613 [2024-12-11 15:08:26.513168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.613 [2024-12-11 15:08:26.513224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.613 [2024-12-11 15:08:26.513240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.613 [2024-12-11 15:08:26.513249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.613 [2024-12-11 15:08:26.513256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.613 [2024-12-11 15:08:26.513272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.613 qpair failed and we were unable to recover it. 00:27:33.613 [2024-12-11 15:08:26.523279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.613 [2024-12-11 15:08:26.523383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.614 [2024-12-11 15:08:26.523398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.614 [2024-12-11 15:08:26.523405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.614 [2024-12-11 15:08:26.523412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.614 [2024-12-11 15:08:26.523427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.614 qpair failed and we were unable to recover it. 00:27:33.614 [2024-12-11 15:08:26.533267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.614 [2024-12-11 15:08:26.533335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.614 [2024-12-11 15:08:26.533351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.614 [2024-12-11 15:08:26.533359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.614 [2024-12-11 15:08:26.533368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.614 [2024-12-11 15:08:26.533383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.614 qpair failed and we were unable to recover it. 
00:27:33.614 [2024-12-11 15:08:26.543277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.614 [2024-12-11 15:08:26.543335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.614 [2024-12-11 15:08:26.543350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.614 [2024-12-11 15:08:26.543357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.614 [2024-12-11 15:08:26.543368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.614 [2024-12-11 15:08:26.543384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.614 qpair failed and we were unable to recover it. 00:27:33.614 [2024-12-11 15:08:26.553347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.614 [2024-12-11 15:08:26.553433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.614 [2024-12-11 15:08:26.553448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.614 [2024-12-11 15:08:26.553456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.614 [2024-12-11 15:08:26.553462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.614 [2024-12-11 15:08:26.553477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.614 qpair failed and we were unable to recover it. 00:27:33.614 [2024-12-11 15:08:26.563311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.614 [2024-12-11 15:08:26.563375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.614 [2024-12-11 15:08:26.563390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.614 [2024-12-11 15:08:26.563397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.614 [2024-12-11 15:08:26.563403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.614 [2024-12-11 15:08:26.563419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.614 qpair failed and we were unable to recover it. 
00:27:33.614 [2024-12-11 15:08:26.573399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.614 [2024-12-11 15:08:26.573453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.614 [2024-12-11 15:08:26.573467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.614 [2024-12-11 15:08:26.573475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.614 [2024-12-11 15:08:26.573481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.614 [2024-12-11 15:08:26.573496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.614 qpair failed and we were unable to recover it. 00:27:33.614 [2024-12-11 15:08:26.583383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.614 [2024-12-11 15:08:26.583460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.614 [2024-12-11 15:08:26.583475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.614 [2024-12-11 15:08:26.583482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.614 [2024-12-11 15:08:26.583488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.614 [2024-12-11 15:08:26.583503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.614 qpair failed and we were unable to recover it. 00:27:33.614 [2024-12-11 15:08:26.593387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.614 [2024-12-11 15:08:26.593445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.614 [2024-12-11 15:08:26.593461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.614 [2024-12-11 15:08:26.593469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.614 [2024-12-11 15:08:26.593476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.614 [2024-12-11 15:08:26.593492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.614 qpair failed and we were unable to recover it. 
00:27:33.614 [2024-12-11 15:08:26.603499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.614 [2024-12-11 15:08:26.603556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.614 [2024-12-11 15:08:26.603571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.614 [2024-12-11 15:08:26.603579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.614 [2024-12-11 15:08:26.603585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.614 [2024-12-11 15:08:26.603600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.614 qpair failed and we were unable to recover it. 00:27:33.614 [2024-12-11 15:08:26.613474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.614 [2024-12-11 15:08:26.613533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.614 [2024-12-11 15:08:26.613547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.614 [2024-12-11 15:08:26.613555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.614 [2024-12-11 15:08:26.613562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.614 [2024-12-11 15:08:26.613577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.614 qpair failed and we were unable to recover it. 00:27:33.614 [2024-12-11 15:08:26.623554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.614 [2024-12-11 15:08:26.623625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.614 [2024-12-11 15:08:26.623639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.614 [2024-12-11 15:08:26.623646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.614 [2024-12-11 15:08:26.623652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.614 [2024-12-11 15:08:26.623668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.614 qpair failed and we were unable to recover it. 
00:27:33.614 [2024-12-11 15:08:26.633506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.614 [2024-12-11 15:08:26.633560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.614 [2024-12-11 15:08:26.633579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.614 [2024-12-11 15:08:26.633587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.614 [2024-12-11 15:08:26.633593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.614 [2024-12-11 15:08:26.633608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.614 qpair failed and we were unable to recover it. 00:27:33.614 [2024-12-11 15:08:26.643521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.614 [2024-12-11 15:08:26.643578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.614 [2024-12-11 15:08:26.643592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.614 [2024-12-11 15:08:26.643600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.614 [2024-12-11 15:08:26.643606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.614 [2024-12-11 15:08:26.643621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.614 qpair failed and we were unable to recover it. 00:27:33.614 [2024-12-11 15:08:26.653645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.614 [2024-12-11 15:08:26.653704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.614 [2024-12-11 15:08:26.653722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.614 [2024-12-11 15:08:26.653730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.615 [2024-12-11 15:08:26.653737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.615 [2024-12-11 15:08:26.653755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.615 qpair failed and we were unable to recover it. 
00:27:33.874 [2024-12-11 15:08:26.663656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.874 [2024-12-11 15:08:26.663735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.874 [2024-12-11 15:08:26.663755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.874 [2024-12-11 15:08:26.663763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.874 [2024-12-11 15:08:26.663769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.874 [2024-12-11 15:08:26.663787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.874 qpair failed and we were unable to recover it. 00:27:33.874 [2024-12-11 15:08:26.673689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.874 [2024-12-11 15:08:26.673748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.874 [2024-12-11 15:08:26.673765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.874 [2024-12-11 15:08:26.673772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.874 [2024-12-11 15:08:26.673785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.874 [2024-12-11 15:08:26.673802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.874 qpair failed and we were unable to recover it. 00:27:33.874 [2024-12-11 15:08:26.683722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.874 [2024-12-11 15:08:26.683814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.874 [2024-12-11 15:08:26.683830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.874 [2024-12-11 15:08:26.683838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.874 [2024-12-11 15:08:26.683844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.874 [2024-12-11 15:08:26.683860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.874 qpair failed and we were unable to recover it. 
00:27:33.874 [2024-12-11 15:08:26.693658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.874 [2024-12-11 15:08:26.693717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.874 [2024-12-11 15:08:26.693732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.874 [2024-12-11 15:08:26.693740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.875 [2024-12-11 15:08:26.693747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.875 [2024-12-11 15:08:26.693762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.875 qpair failed and we were unable to recover it. 00:27:33.875 [2024-12-11 15:08:26.703785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.875 [2024-12-11 15:08:26.703866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.875 [2024-12-11 15:08:26.703881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.875 [2024-12-11 15:08:26.703889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.875 [2024-12-11 15:08:26.703895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.875 [2024-12-11 15:08:26.703911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.875 qpair failed and we were unable to recover it. 00:27:33.875 [2024-12-11 15:08:26.713790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.875 [2024-12-11 15:08:26.713848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.875 [2024-12-11 15:08:26.713863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.875 [2024-12-11 15:08:26.713870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.875 [2024-12-11 15:08:26.713877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.875 [2024-12-11 15:08:26.713892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.875 qpair failed and we were unable to recover it. 
00:27:33.875 [2024-12-11 15:08:26.723767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.875 [2024-12-11 15:08:26.723821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.875 [2024-12-11 15:08:26.723835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.875 [2024-12-11 15:08:26.723842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.875 [2024-12-11 15:08:26.723849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.875 [2024-12-11 15:08:26.723864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.875 qpair failed and we were unable to recover it. 00:27:33.875 [2024-12-11 15:08:26.733864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.875 [2024-12-11 15:08:26.733919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.875 [2024-12-11 15:08:26.733935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.875 [2024-12-11 15:08:26.733942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.875 [2024-12-11 15:08:26.733948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.875 [2024-12-11 15:08:26.733963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.875 qpair failed and we were unable to recover it. 00:27:33.875 [2024-12-11 15:08:26.743907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.875 [2024-12-11 15:08:26.743966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.875 [2024-12-11 15:08:26.743981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.875 [2024-12-11 15:08:26.743988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.875 [2024-12-11 15:08:26.743995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.875 [2024-12-11 15:08:26.744010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.875 qpair failed and we were unable to recover it. 
00:27:33.875 [2024-12-11 15:08:26.753866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.875 [2024-12-11 15:08:26.753922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.875 [2024-12-11 15:08:26.753936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.875 [2024-12-11 15:08:26.753943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.875 [2024-12-11 15:08:26.753950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.875 [2024-12-11 15:08:26.753965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.875 qpair failed and we were unable to recover it. 00:27:33.875 [2024-12-11 15:08:26.763928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.875 [2024-12-11 15:08:26.763996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.875 [2024-12-11 15:08:26.764013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.875 [2024-12-11 15:08:26.764020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.875 [2024-12-11 15:08:26.764026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.875 [2024-12-11 15:08:26.764041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.875 qpair failed and we were unable to recover it. 00:27:33.875 [2024-12-11 15:08:26.773952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.875 [2024-12-11 15:08:26.774033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.875 [2024-12-11 15:08:26.774048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.875 [2024-12-11 15:08:26.774055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.875 [2024-12-11 15:08:26.774061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.875 [2024-12-11 15:08:26.774077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.875 qpair failed and we were unable to recover it. 
00:27:33.875 [2024-12-11 15:08:26.783940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.875 [2024-12-11 15:08:26.783996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.875 [2024-12-11 15:08:26.784012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.875 [2024-12-11 15:08:26.784019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.875 [2024-12-11 15:08:26.784025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.875 [2024-12-11 15:08:26.784040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.875 qpair failed and we were unable to recover it. 00:27:33.875 [2024-12-11 15:08:26.793980] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.875 [2024-12-11 15:08:26.794035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.875 [2024-12-11 15:08:26.794052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.875 [2024-12-11 15:08:26.794059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.875 [2024-12-11 15:08:26.794066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.875 [2024-12-11 15:08:26.794081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.875 qpair failed and we were unable to recover it. 00:27:33.875 [2024-12-11 15:08:26.804041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.875 [2024-12-11 15:08:26.804098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.875 [2024-12-11 15:08:26.804112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.875 [2024-12-11 15:08:26.804120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.875 [2024-12-11 15:08:26.804129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.875 [2024-12-11 15:08:26.804144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.875 qpair failed and we were unable to recover it. 
00:27:33.875 [2024-12-11 15:08:26.814097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.875 [2024-12-11 15:08:26.814151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.875 [2024-12-11 15:08:26.814170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.875 [2024-12-11 15:08:26.814177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.875 [2024-12-11 15:08:26.814184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.875 [2024-12-11 15:08:26.814198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.875 qpair failed and we were unable to recover it. 00:27:33.875 [2024-12-11 15:08:26.824144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.875 [2024-12-11 15:08:26.824206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.875 [2024-12-11 15:08:26.824221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.875 [2024-12-11 15:08:26.824228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.875 [2024-12-11 15:08:26.824234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.875 [2024-12-11 15:08:26.824249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.875 qpair failed and we were unable to recover it. 00:27:33.876 [2024-12-11 15:08:26.834146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.876 [2024-12-11 15:08:26.834208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.876 [2024-12-11 15:08:26.834224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.876 [2024-12-11 15:08:26.834231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.876 [2024-12-11 15:08:26.834237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.876 [2024-12-11 15:08:26.834252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.876 qpair failed and we were unable to recover it. 
00:27:33.876 [2024-12-11 15:08:26.844192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.876 [2024-12-11 15:08:26.844250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.876 [2024-12-11 15:08:26.844264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.876 [2024-12-11 15:08:26.844272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.876 [2024-12-11 15:08:26.844278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.876 [2024-12-11 15:08:26.844294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.876 qpair failed and we were unable to recover it. 00:27:33.876 [2024-12-11 15:08:26.854273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.876 [2024-12-11 15:08:26.854375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.876 [2024-12-11 15:08:26.854389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.876 [2024-12-11 15:08:26.854396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.876 [2024-12-11 15:08:26.854403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.876 [2024-12-11 15:08:26.854418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.876 qpair failed and we were unable to recover it. 00:27:33.876 [2024-12-11 15:08:26.864259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.876 [2024-12-11 15:08:26.864317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.876 [2024-12-11 15:08:26.864332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.876 [2024-12-11 15:08:26.864339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.876 [2024-12-11 15:08:26.864345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.876 [2024-12-11 15:08:26.864360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.876 qpair failed and we were unable to recover it. 
00:27:33.876 [2024-12-11 15:08:26.874289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.876 [2024-12-11 15:08:26.874359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.876 [2024-12-11 15:08:26.874373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.876 [2024-12-11 15:08:26.874380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.876 [2024-12-11 15:08:26.874386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.876 [2024-12-11 15:08:26.874402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.876 qpair failed and we were unable to recover it. 00:27:33.876 [2024-12-11 15:08:26.884301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.876 [2024-12-11 15:08:26.884355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.876 [2024-12-11 15:08:26.884369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.876 [2024-12-11 15:08:26.884376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.876 [2024-12-11 15:08:26.884383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.876 [2024-12-11 15:08:26.884398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.876 qpair failed and we were unable to recover it. 00:27:33.876 [2024-12-11 15:08:26.894334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.876 [2024-12-11 15:08:26.894387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.876 [2024-12-11 15:08:26.894404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.876 [2024-12-11 15:08:26.894413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.876 [2024-12-11 15:08:26.894420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.876 [2024-12-11 15:08:26.894435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.876 qpair failed and we were unable to recover it. 
00:27:33.876 [2024-12-11 15:08:26.904379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.876 [2024-12-11 15:08:26.904442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.876 [2024-12-11 15:08:26.904456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.876 [2024-12-11 15:08:26.904463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.876 [2024-12-11 15:08:26.904470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.876 [2024-12-11 15:08:26.904484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.876 qpair failed and we were unable to recover it. 00:27:33.876 [2024-12-11 15:08:26.914412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:33.876 [2024-12-11 15:08:26.914469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:33.876 [2024-12-11 15:08:26.914483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:33.876 [2024-12-11 15:08:26.914490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:33.876 [2024-12-11 15:08:26.914497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:33.876 [2024-12-11 15:08:26.914512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:33.876 qpair failed and we were unable to recover it. 00:27:34.135 [2024-12-11 15:08:26.924417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.136 [2024-12-11 15:08:26.924474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.136 [2024-12-11 15:08:26.924494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.136 [2024-12-11 15:08:26.924506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.136 [2024-12-11 15:08:26.924515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.136 [2024-12-11 15:08:26.924538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.136 qpair failed and we were unable to recover it. 
00:27:34.136 [2024-12-11 15:08:26.934443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.136 [2024-12-11 15:08:26.934503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.136 [2024-12-11 15:08:26.934521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.136 [2024-12-11 15:08:26.934529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.136 [2024-12-11 15:08:26.934541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.136 [2024-12-11 15:08:26.934559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-12-11 15:08:26.944490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.136 [2024-12-11 15:08:26.944546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.136 [2024-12-11 15:08:26.944562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.136 [2024-12-11 15:08:26.944569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.136 [2024-12-11 15:08:26.944576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.136 [2024-12-11 15:08:26.944592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-12-11 15:08:26.954541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.136 [2024-12-11 15:08:26.954613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.136 [2024-12-11 15:08:26.954628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.136 [2024-12-11 15:08:26.954636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.136 [2024-12-11 15:08:26.954642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.136 [2024-12-11 15:08:26.954658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.136 qpair failed and we were unable to recover it. 
00:27:34.136 [2024-12-11 15:08:26.964562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.136 [2024-12-11 15:08:26.964620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.136 [2024-12-11 15:08:26.964635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.136 [2024-12-11 15:08:26.964642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.136 [2024-12-11 15:08:26.964648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.136 [2024-12-11 15:08:26.964663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-12-11 15:08:26.974552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.136 [2024-12-11 15:08:26.974608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.136 [2024-12-11 15:08:26.974623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.136 [2024-12-11 15:08:26.974631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.136 [2024-12-11 15:08:26.974637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.136 [2024-12-11 15:08:26.974652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-12-11 15:08:26.984597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.136 [2024-12-11 15:08:26.984672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.136 [2024-12-11 15:08:26.984686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.136 [2024-12-11 15:08:26.984694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.136 [2024-12-11 15:08:26.984700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.136 [2024-12-11 15:08:26.984715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.136 qpair failed and we were unable to recover it. 
00:27:34.136 [2024-12-11 15:08:26.994625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.136 [2024-12-11 15:08:26.994702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.136 [2024-12-11 15:08:26.994716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.136 [2024-12-11 15:08:26.994724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.136 [2024-12-11 15:08:26.994730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.136 [2024-12-11 15:08:26.994745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-12-11 15:08:27.004620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.136 [2024-12-11 15:08:27.004689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.136 [2024-12-11 15:08:27.004704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.136 [2024-12-11 15:08:27.004711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.136 [2024-12-11 15:08:27.004717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.136 [2024-12-11 15:08:27.004732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-12-11 15:08:27.014668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.136 [2024-12-11 15:08:27.014727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.136 [2024-12-11 15:08:27.014742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.136 [2024-12-11 15:08:27.014750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.136 [2024-12-11 15:08:27.014756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.136 [2024-12-11 15:08:27.014772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.136 qpair failed and we were unable to recover it. 
00:27:34.136 [2024-12-11 15:08:27.024718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.136 [2024-12-11 15:08:27.024779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.136 [2024-12-11 15:08:27.024798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.136 [2024-12-11 15:08:27.024806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.136 [2024-12-11 15:08:27.024812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.136 [2024-12-11 15:08:27.024829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-12-11 15:08:27.034783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.136 [2024-12-11 15:08:27.034850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.136 [2024-12-11 15:08:27.034866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.136 [2024-12-11 15:08:27.034873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.136 [2024-12-11 15:08:27.034880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.136 [2024-12-11 15:08:27.034895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.136 qpair failed and we were unable to recover it. 00:27:34.136 [2024-12-11 15:08:27.044777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.136 [2024-12-11 15:08:27.044840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.136 [2024-12-11 15:08:27.044855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.136 [2024-12-11 15:08:27.044862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.136 [2024-12-11 15:08:27.044869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.136 [2024-12-11 15:08:27.044884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.136 qpair failed and we were unable to recover it. 
00:27:34.136 [2024-12-11 15:08:27.054794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.136 [2024-12-11 15:08:27.054845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.137 [2024-12-11 15:08:27.054859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.137 [2024-12-11 15:08:27.054866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.137 [2024-12-11 15:08:27.054873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.137 [2024-12-11 15:08:27.054888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-12-11 15:08:27.064840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.137 [2024-12-11 15:08:27.064912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.137 [2024-12-11 15:08:27.064927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.137 [2024-12-11 15:08:27.064934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.137 [2024-12-11 15:08:27.064944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.137 [2024-12-11 15:08:27.064959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-12-11 15:08:27.074850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.137 [2024-12-11 15:08:27.074908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.137 [2024-12-11 15:08:27.074922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.137 [2024-12-11 15:08:27.074929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.137 [2024-12-11 15:08:27.074935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.137 [2024-12-11 15:08:27.074950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.137 qpair failed and we were unable to recover it. 
00:27:34.137 [2024-12-11 15:08:27.084879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.137 [2024-12-11 15:08:27.084935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.137 [2024-12-11 15:08:27.084950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.137 [2024-12-11 15:08:27.084957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.137 [2024-12-11 15:08:27.084963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.137 [2024-12-11 15:08:27.084978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-12-11 15:08:27.094911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.137 [2024-12-11 15:08:27.094967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.137 [2024-12-11 15:08:27.094982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.137 [2024-12-11 15:08:27.094989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.137 [2024-12-11 15:08:27.094995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.137 [2024-12-11 15:08:27.095010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-12-11 15:08:27.104948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.137 [2024-12-11 15:08:27.105007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.137 [2024-12-11 15:08:27.105021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.137 [2024-12-11 15:08:27.105029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.137 [2024-12-11 15:08:27.105037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.137 [2024-12-11 15:08:27.105051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.137 qpair failed and we were unable to recover it. 
00:27:34.137 [2024-12-11 15:08:27.114971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.137 [2024-12-11 15:08:27.115028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.137 [2024-12-11 15:08:27.115042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.137 [2024-12-11 15:08:27.115049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.137 [2024-12-11 15:08:27.115056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.137 [2024-12-11 15:08:27.115070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-12-11 15:08:27.124997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.137 [2024-12-11 15:08:27.125054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.137 [2024-12-11 15:08:27.125069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.137 [2024-12-11 15:08:27.125077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.137 [2024-12-11 15:08:27.125083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.137 [2024-12-11 15:08:27.125099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-12-11 15:08:27.135020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.137 [2024-12-11 15:08:27.135083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.137 [2024-12-11 15:08:27.135098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.137 [2024-12-11 15:08:27.135105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.137 [2024-12-11 15:08:27.135112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.137 [2024-12-11 15:08:27.135126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.137 qpair failed and we were unable to recover it. 
00:27:34.137 [2024-12-11 15:08:27.145063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.137 [2024-12-11 15:08:27.145121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.137 [2024-12-11 15:08:27.145135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.137 [2024-12-11 15:08:27.145143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.137 [2024-12-11 15:08:27.145149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.137 [2024-12-11 15:08:27.145170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-12-11 15:08:27.155152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.137 [2024-12-11 15:08:27.155254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.137 [2024-12-11 15:08:27.155272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.137 [2024-12-11 15:08:27.155279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.137 [2024-12-11 15:08:27.155285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.137 [2024-12-11 15:08:27.155300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.137 [2024-12-11 15:08:27.165160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.137 [2024-12-11 15:08:27.165221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.137 [2024-12-11 15:08:27.165235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.137 [2024-12-11 15:08:27.165243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.137 [2024-12-11 15:08:27.165249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.137 [2024-12-11 15:08:27.165264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.137 qpair failed and we were unable to recover it. 
00:27:34.137 [2024-12-11 15:08:27.175180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.137 [2024-12-11 15:08:27.175238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.137 [2024-12-11 15:08:27.175252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.137 [2024-12-11 15:08:27.175259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.137 [2024-12-11 15:08:27.175266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x14ccbe0 00:27:34.137 [2024-12-11 15:08:27.175281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:27:34.137 qpair failed and we were unable to recover it. 00:27:34.397 [2024-12-11 15:08:27.185185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.397 [2024-12-11 15:08:27.185292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.397 [2024-12-11 15:08:27.185347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.397 [2024-12-11 15:08:27.185372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.397 [2024-12-11 15:08:27.185393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9ce0000b90 00:27:34.397 [2024-12-11 15:08:27.185446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-12-11 15:08:27.195217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.397 [2024-12-11 15:08:27.195291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.397 [2024-12-11 15:08:27.195324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.397 [2024-12-11 15:08:27.195344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.397 [2024-12-11 15:08:27.195359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9ce0000b90 00:27:34.397 [2024-12-11 15:08:27.195393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:27:34.397 qpair failed and we were unable to recover it. 
00:27:34.397 [2024-12-11 15:08:27.205239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.397 [2024-12-11 15:08:27.205348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.397 [2024-12-11 15:08:27.205403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.397 [2024-12-11 15:08:27.205427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.397 [2024-12-11 15:08:27.205448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cdc000b90 00:27:34.397 [2024-12-11 15:08:27.205499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-12-11 15:08:27.215250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.397 [2024-12-11 15:08:27.215321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.397 [2024-12-11 15:08:27.215347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.397 [2024-12-11 15:08:27.215363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.397 [2024-12-11 15:08:27.215375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9cdc000b90 00:27:34.397 [2024-12-11 15:08:27.215406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:27:34.397 qpair failed and we were unable to recover it. 00:27:34.397 [2024-12-11 15:08:27.215506] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:27:34.397 A controller has encountered a failure and is being reset. 00:27:34.397 [2024-12-11 15:08:27.225298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.397 [2024-12-11 15:08:27.225419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.397 [2024-12-11 15:08:27.225475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.397 [2024-12-11 15:08:27.225500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.397 [2024-12-11 15:08:27.225522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9ce8000b90 00:27:34.397 [2024-12-11 15:08:27.225575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:34.397 qpair failed and we were unable to recover it. 
00:27:34.397 [2024-12-11 15:08:27.235289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:34.398 [2024-12-11 15:08:27.235385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:34.398 [2024-12-11 15:08:27.235412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:34.398 [2024-12-11 15:08:27.235426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:34.398 [2024-12-11 15:08:27.235445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9ce8000b90 00:27:34.398 [2024-12-11 15:08:27.235476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:34.398 qpair failed and we were unable to recover it. 00:27:34.398 Controller properly reset. 00:27:34.398 Initializing NVMe Controllers 00:27:34.398 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:34.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:34.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:34.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:34.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:34.398 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:34.398 Initialization complete. Launching workers. 00:27:34.398 Starting thread on core 1 00:27:34.398 Starting thread on core 2 00:27:34.398 Starting thread on core 3 00:27:34.398 Starting thread on core 0 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:34.398 00:27:34.398 real 0m10.675s 00:27:34.398 user 0m19.589s 00:27:34.398 sys 0m4.755s 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:34.398 ************************************ 00:27:34.398 END TEST nvmf_target_disconnect_tc2 00:27:34.398 ************************************ 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:27:34.398 rmmod nvme_tcp 00:27:34.398 rmmod nvme_fabrics 00:27:34.398 rmmod nvme_keyring 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3272227 ']' 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3272227 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3272227 ']' 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3272227 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3272227 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3272227' 00:27:34.398 killing process with pid 3272227 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3272227 00:27:34.398 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3272227 00:27:34.657 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:34.657 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:34.657 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:34.657 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:27:34.657 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:27:34.657 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:34.657 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:27:34.657 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:34.657 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:34.657 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.657 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:34.657 15:08:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.194 15:08:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:37.194 00:27:37.194 real 0m19.558s 00:27:37.194 user 0m46.779s 00:27:37.194 sys 0m9.747s 00:27:37.194 15:08:29 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:37.194 15:08:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:37.194 ************************************ 00:27:37.194 END TEST nvmf_target_disconnect 00:27:37.194 ************************************ 00:27:37.194 15:08:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:37.194 00:27:37.194 real 5m50.783s 00:27:37.194 user 10m31.039s 00:27:37.194 sys 1m58.439s 00:27:37.194 15:08:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:37.194 15:08:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.194 ************************************ 00:27:37.194 END TEST nvmf_host 00:27:37.194 ************************************ 00:27:37.194 15:08:29 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:37.194 15:08:29 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:27:37.194 15:08:29 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:37.194 15:08:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:37.194 15:08:29 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:37.194 15:08:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:37.194 ************************************ 00:27:37.194 START TEST nvmf_target_core_interrupt_mode 00:27:37.194 ************************************ 00:27:37.194 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:37.194 * Looking for test storage... 
00:27:37.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:37.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.195 --rc genhtml_branch_coverage=1 00:27:37.195 --rc genhtml_function_coverage=1 00:27:37.195 --rc genhtml_legend=1 00:27:37.195 --rc geninfo_all_blocks=1 00:27:37.195 --rc geninfo_unexecuted_blocks=1 00:27:37.195 00:27:37.195 ' 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:37.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.195 --rc genhtml_branch_coverage=1 00:27:37.195 --rc genhtml_function_coverage=1 00:27:37.195 --rc genhtml_legend=1 00:27:37.195 --rc geninfo_all_blocks=1 00:27:37.195 --rc geninfo_unexecuted_blocks=1 00:27:37.195 00:27:37.195 ' 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:37.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.195 --rc genhtml_branch_coverage=1 00:27:37.195 --rc genhtml_function_coverage=1 00:27:37.195 --rc genhtml_legend=1 00:27:37.195 --rc geninfo_all_blocks=1 00:27:37.195 --rc geninfo_unexecuted_blocks=1 00:27:37.195 00:27:37.195 ' 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:37.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.195 --rc genhtml_branch_coverage=1 00:27:37.195 --rc genhtml_function_coverage=1 00:27:37.195 --rc genhtml_legend=1 00:27:37.195 --rc geninfo_all_blocks=1 00:27:37.195 --rc geninfo_unexecuted_blocks=1 00:27:37.195 00:27:37.195 ' 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:37.195 15:08:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:37.195 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:37.195 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:37.195 ************************************ 00:27:37.196 START TEST nvmf_abort 00:27:37.196 ************************************ 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:37.196 * Looking for test storage... 00:27:37.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:37.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.196 --rc genhtml_branch_coverage=1 00:27:37.196 --rc genhtml_function_coverage=1 00:27:37.196 --rc genhtml_legend=1 00:27:37.196 --rc geninfo_all_blocks=1 00:27:37.196 --rc geninfo_unexecuted_blocks=1 00:27:37.196 00:27:37.196 ' 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:37.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.196 --rc genhtml_branch_coverage=1 00:27:37.196 --rc genhtml_function_coverage=1 00:27:37.196 --rc genhtml_legend=1 00:27:37.196 --rc geninfo_all_blocks=1 00:27:37.196 --rc geninfo_unexecuted_blocks=1 00:27:37.196 00:27:37.196 ' 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:37.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.196 --rc genhtml_branch_coverage=1 00:27:37.196 --rc genhtml_function_coverage=1 00:27:37.196 --rc genhtml_legend=1 00:27:37.196 --rc geninfo_all_blocks=1 00:27:37.196 --rc geninfo_unexecuted_blocks=1 00:27:37.196 00:27:37.196 ' 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:37.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.196 --rc genhtml_branch_coverage=1 00:27:37.196 --rc genhtml_function_coverage=1 00:27:37.196 --rc genhtml_legend=1 00:27:37.196 --rc geninfo_all_blocks=1 00:27:37.196 --rc geninfo_unexecuted_blocks=1 00:27:37.196 00:27:37.196 ' 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.196 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.197 15:08:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:37.197 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:37.197 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:37.197 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:37.197 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:37.456 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:37.456 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:37.456 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:37.456 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:37.456 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:37.456 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:37.456 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:37.456 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:37.456 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:37.456 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:37.456 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.456 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:37.456 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:37.456 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:27:37.456 15:08:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:44.026 15:08:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:44.026 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:44.026 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:44.026 Found net devices under 0000:86:00.0: cvl_0_0 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:44.026 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:44.027 Found net devices under 0000:86:00.1: cvl_0_1 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:44.027 15:08:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:44.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:44.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:27:44.027 00:27:44.027 --- 10.0.0.2 ping statistics --- 00:27:44.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.027 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:44.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:44.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:27:44.027 00:27:44.027 --- 10.0.0.1 ping statistics --- 00:27:44.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:44.027 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3276795 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3276795 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3276795 ']' 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:44.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.027 [2024-12-11 15:08:36.263531] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:44.027 [2024-12-11 15:08:36.264409] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:27:44.027 [2024-12-11 15:08:36.264440] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:44.027 [2024-12-11 15:08:36.341664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:44.027 [2024-12-11 15:08:36.379978] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:44.027 [2024-12-11 15:08:36.380011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:44.027 [2024-12-11 15:08:36.380018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:44.027 [2024-12-11 15:08:36.380024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:44.027 [2024-12-11 15:08:36.380029] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:44.027 [2024-12-11 15:08:36.381445] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:44.027 [2024-12-11 15:08:36.381551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.027 [2024-12-11 15:08:36.381552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:44.027 [2024-12-11 15:08:36.449660] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:44.027 [2024-12-11 15:08:36.450507] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:44.027 [2024-12-11 15:08:36.450910] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:44.027 [2024-12-11 15:08:36.451020] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.027 [2024-12-11 15:08:36.526293] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.027 Malloc0 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.027 Delay0 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.027 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:44.028 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:44.028 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.028 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.028 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:44.028 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.028 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.028 [2024-12-11 15:08:36.614239] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.028 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.028 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:44.028 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.028 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.028 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.028 15:08:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:44.028 [2024-12-11 15:08:36.739972] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:45.926 Initializing NVMe Controllers 00:27:45.926 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:45.926 controller IO queue size 128 less than required 00:27:45.926 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:45.926 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:45.926 Initialization complete. Launching workers. 
00:27:45.926 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36620 00:27:45.926 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36677, failed to submit 66 00:27:45.926 success 36620, unsuccessful 57, failed 0 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:45.926 rmmod nvme_tcp 00:27:45.926 rmmod nvme_fabrics 00:27:45.926 rmmod nvme_keyring 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3276795 ']' 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3276795 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3276795 ']' 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3276795 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3276795 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3276795' 00:27:45.926 killing process with pid 3276795 
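Once the harness wrappers in the trace above are stripped away, the abort run reduces to a few concrete steps: nvmf_tgt is launched inside the target namespace in interrupt mode, configured over the RPC socket, and the bundled abort example is pointed at the TCP listener. A condensed, hand-runnable sketch of those steps follows; every path, address and flag is copied from the trace, and rpc_cmd is assumed to be nothing more than the harness wrapper around scripts/rpc.py, so rpc.py is called directly here.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
RPC=$SPDK/scripts/rpc.py
# target: same flags nvmfappstart used above (-i 0 -e 0xFFFF --interrupt-mode -m 0xE)
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
sleep 2   # the harness waits on /var/tmp/spdk.sock with waitforlisten instead of sleeping
# configuration, exactly as issued through rpc_cmd in the trace
$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
$RPC bdev_malloc_create 64 4096 -b Malloc0
$RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# workload: queue depth 128 against the delay bdev; the NS/CTRLR counters printed
# above summarize how many of those I/O were aborted and how many abort commands
# the example managed to submit
$SPDK/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128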
00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3276795 00:27:45.926 15:08:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3276795 00:27:46.186 15:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:46.186 15:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:46.186 15:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:46.186 15:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:27:46.186 15:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:27:46.186 15:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:46.186 15:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:27:46.186 15:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:46.186 15:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:46.186 15:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:46.186 15:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:46.186 15:08:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.723 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:48.723 00:27:48.723 real 0m11.109s 00:27:48.723 user 0m10.194s 00:27:48.723 sys 0m5.598s 00:27:48.723 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:48.723 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:48.723 ************************************ 00:27:48.723 END TEST nvmf_abort 00:27:48.723 ************************************ 00:27:48.723 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:48.723 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:48.723 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:48.723 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:48.723 ************************************ 00:27:48.723 START TEST nvmf_ns_hotplug_stress 00:27:48.723 ************************************ 00:27:48.723 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:48.723 * Looking for test storage... 
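The nvmf_abort teardown that just completed and the nvmf_ns_hotplug_stress bring-up that starts here go through the same nvmftestinit/nvmftestfini bracket. Condensed from the two traces, the per-test network plumbing and its undo amount to roughly the sketch below; interface names and addresses are the ones this host reported, and _remove_spdk_ns is not expanded in the trace, so the final namespace cleanup is shown with the plain ip command it presumably wraps.

# bring-up (nvmftestinit / nvmf_tcp_init in the trace)
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                   # target reachable from the initiator side
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the other way around

# tear-down (nvmfcleanup / nvmftestfini, in the order the trace shows)
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                   # 3276795 in the run above
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk                      # assumed equivalent of _remove_spdk_ns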
00:27:48.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:27:48.723 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:48.723 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:27:48.723 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:48.723 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:48.723 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:48.723 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:48.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.724 --rc genhtml_branch_coverage=1 00:27:48.724 --rc genhtml_function_coverage=1 00:27:48.724 --rc genhtml_legend=1 00:27:48.724 --rc geninfo_all_blocks=1 00:27:48.724 --rc geninfo_unexecuted_blocks=1 00:27:48.724 00:27:48.724 ' 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:48.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.724 --rc genhtml_branch_coverage=1 00:27:48.724 --rc genhtml_function_coverage=1 00:27:48.724 --rc genhtml_legend=1 00:27:48.724 --rc geninfo_all_blocks=1 00:27:48.724 --rc geninfo_unexecuted_blocks=1 00:27:48.724 00:27:48.724 ' 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:48.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.724 --rc genhtml_branch_coverage=1 00:27:48.724 --rc genhtml_function_coverage=1 00:27:48.724 --rc genhtml_legend=1 00:27:48.724 --rc geninfo_all_blocks=1 00:27:48.724 --rc geninfo_unexecuted_blocks=1 00:27:48.724 00:27:48.724 ' 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:48.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:48.724 --rc genhtml_branch_coverage=1 00:27:48.724 --rc genhtml_function_coverage=1 
00:27:48.724 --rc genhtml_legend=1 00:27:48.724 --rc geninfo_all_blocks=1 00:27:48.724 --rc geninfo_unexecuted_blocks=1 00:27:48.724 00:27:48.724 ' 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:48.724 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:27:48.725 15:08:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:54.001 15:08:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:54.001 15:08:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:54.001 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:54.001 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:54.001 
15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:54.001 Found net devices under 0000:86:00.0: cvl_0_0 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:54.001 Found net devices under 0000:86:00.1: cvl_0_1 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.001 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:54.002 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:27:54.002 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:54.002 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:54.002 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:54.002 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:54.002 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:54.002 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:54.002 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.002 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:54.002 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:54.002 15:08:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:54.002 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:54.002 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:54.002 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:54.002 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:54.002 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:54.002 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:54.002 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:54.261 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:54.261 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:54.261 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:54.261 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:54.261 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:54.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:54.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:27:54.520 00:27:54.520 --- 10.0.0.2 ping statistics --- 00:27:54.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.520 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:54.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:54.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:27:54.520 00:27:54.520 --- 10.0.0.1 ping statistics --- 00:27:54.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:54.520 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3280789 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3280789 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3280789 ']' 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:54.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
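The nvmf_tcp_init steps traced above split the two E810 ports into a target/initiator pair: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and gets the target address 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, and a ping in each direction confirms the link. A condensed sketch of that setup, using the device names and addresses from this run, is shown below; it restates the traced commands rather than reproducing nvmf/common.sh itself.

  # Two-namespace NVMe/TCP test topology, as set up by nvmf_tcp_init (sketch).
  TARGET_NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$TARGET_NS"
  ip link set cvl_0_0 netns "$TARGET_NS"                            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator address, default namespace
  ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
  ip netns exec "$TARGET_NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                # initiator -> target
  ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                     # target -> initiator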
00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:54.520 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:54.520 [2024-12-11 15:08:47.438804] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:54.520 [2024-12-11 15:08:47.439793] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:27:54.520 [2024-12-11 15:08:47.439831] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:54.520 [2024-12-11 15:08:47.521211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:54.520 [2024-12-11 15:08:47.562001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:54.520 [2024-12-11 15:08:47.562037] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:54.520 [2024-12-11 15:08:47.562044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:54.520 [2024-12-11 15:08:47.562051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:54.520 [2024-12-11 15:08:47.562056] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:54.520 [2024-12-11 15:08:47.563551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:54.520 [2024-12-11 15:08:47.563662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.520 [2024-12-11 15:08:47.563663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:54.779 [2024-12-11 15:08:47.632386] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:54.779 [2024-12-11 15:08:47.633249] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:54.779 [2024-12-11 15:08:47.633645] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:54.779 [2024-12-11 15:08:47.633763] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
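With the topology up, nvmfappstart launches the target application inside the namespace in interrupt mode (nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE) and waits for its RPC socket; the trace that follows then configures it through rpc.py. The outline below restates that bring-up and configuration sequence in one place, with paths shortened for readability; the polling loop is a simplified stand-in for waitforlisten, not a copy of it.

  # Target bring-up and configuration issued by ns_hotplug_stress.sh (sketch; paths shortened).
  RPC="./spdk/scripts/rpc.py"
  ip netns exec cvl_0_0_ns_spdk ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &
  nvmfpid=$!
  until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done     # wait for /var/tmp/spdk.sock to answer
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 512 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC bdev_null_create NULL1 1000 512
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1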
00:27:54.779 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:54.779 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:54.779 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:54.779 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:54.779 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:54.779 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.779 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:27:54.779 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:55.039 [2024-12-11 15:08:47.868537] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:55.039 15:08:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:55.298 15:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:55.298 [2024-12-11 15:08:48.272830] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.298 15:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:55.557 15:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:55.817 Malloc0 00:27:55.817 15:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:56.074 Delay0 00:27:56.074 15:08:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.074 15:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:56.331 NULL1 00:27:56.331 15:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 NULL1 00:27:56.588 15:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3281054 00:27:56.588 15:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:27:56.588 15:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:27:56.588 15:08:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:57.958 Read completed with error (sct=0, sc=11) 00:27:57.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.958 15:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:57.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:57.958 15:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:57.958 15:08:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:58.215 true 00:27:58.215 15:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:27:58.215 15:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.145 15:08:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.145 15:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:59.145 15:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:59.402 true 00:27:59.402 15:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:27:59.402 15:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:59.659 15:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.659 15:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:59.659 15:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:59.916 true 00:27:59.916 15:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:27:59.916 15:08:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.286 15:08:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.286 15:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:01.286 15:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:01.286 true 00:28:01.286 15:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:01.286 15:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.542 15:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.799 15:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:01.799 15:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:02.056 true 00:28:02.056 15:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:02.056 15:08:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:02.985 15:08:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.242 15:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:03.242 15:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 
1006 00:28:03.499 true 00:28:03.499 15:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:03.499 15:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.499 15:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.756 15:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:03.756 15:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:04.013 true 00:28:04.013 15:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:04.013 15:08:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:04.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:04.943 15:08:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:04.943 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.200 15:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:05.200 15:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:05.457 true 00:28:05.457 15:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:05.457 15:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.715 15:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.971 15:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:05.971 15:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:05.971 true 00:28:05.971 15:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:05.971 15:08:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.340 15:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.340 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:07.340 15:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:07.340 15:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:07.598 true 00:28:07.598 15:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:07.598 15:09:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:08.529 15:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.529 15:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:08.529 15:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:08.785 true 00:28:08.785 15:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:08.785 15:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.043 15:09:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:09.300 15:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:09.300 15:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:09.300 true 00:28:09.300 15:09:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:09.300 15:09:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.670 15:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:10.670 15:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:10.670 15:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:10.927 true 00:28:10.927 15:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:10.927 15:09:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:11.857 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:11.857 15:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:11.857 15:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:11.857 15:09:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:12.114 true 00:28:12.114 15:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:12.114 15:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.370 15:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.626 15:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:12.626 15:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:12.882 true 00:28:12.882 15:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:12.882 15:09:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.869 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:13.869 15:09:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:13.869 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:13.869 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.126 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.126 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.126 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.126 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:14.126 15:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:14.126 15:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:14.383 true 00:28:14.383 15:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:14.383 15:09:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.312 15:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.312 15:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:15.312 15:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:15.569 true 00:28:15.569 15:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:15.569 15:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.826 15:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:16.083 15:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:16.083 15:09:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:16.083 true 00:28:16.339 
15:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:16.339 15:09:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.270 15:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:17.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:17.527 15:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:17.527 15:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:17.784 true 00:28:17.784 15:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:17.784 15:09:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.715 15:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:18.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:18.715 15:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:18.715 15:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:18.972 true 00:28:18.972 15:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:18.972 15:09:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.972 15:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:19.229 15:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:19.229 15:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:19.486 true 00:28:19.486 15:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:19.486 15:09:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.855 15:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.855 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:20.855 15:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:20.855 15:09:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:21.113 true 00:28:21.113 15:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:21.113 15:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.043 15:09:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.043 15:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:22.043 15:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:22.298 true 00:28:22.298 15:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:22.298 15:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.555 15:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.811 15:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:22.811 15:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:22.811 true 00:28:23.068 15:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:23.068 15:09:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:23.999 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:23.999 15:09:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:24.255 15:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:24.255 15:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:24.255 true 00:28:24.255 15:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:24.255 15:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:24.511 15:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:24.768 15:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:24.768 15:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:25.024 true 00:28:25.024 15:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:25.024 15:09:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:25.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.211 15:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:26.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:26.211 15:09:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:26.211 15:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:26.468 true 00:28:26.468 15:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:26.468 15:09:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:27.398 15:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:27.398 Initializing NVMe Controllers 00:28:27.398 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:27.399 Controller IO queue size 128, less than required. 00:28:27.399 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:27.399 Controller IO queue size 128, less than required. 00:28:27.399 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:27.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:27.399 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:27.399 Initialization complete. Launching workers. 00:28:27.399 ======================================================== 00:28:27.399 Latency(us) 00:28:27.399 Device Information : IOPS MiB/s Average min max 00:28:27.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1784.03 0.87 48998.07 2306.22 1046574.89 00:28:27.399 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17329.09 8.46 7385.97 2081.43 435335.33 00:28:27.399 ======================================================== 00:28:27.399 Total : 19113.12 9.33 11270.07 2081.43 1046574.89 00:28:27.399 00:28:27.656 15:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:28:27.656 15:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:28:27.656 true 00:28:27.656 15:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3281054 00:28:27.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3281054) - No such process 00:28:27.656 15:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3281054 00:28:27.656 15:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:27.913 15:09:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:28.170 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:28.170 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:28.170 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:28.170 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:28.170 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:28.428 null0 00:28:28.428 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:28.428 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:28.428 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:28.428 null1 00:28:28.428 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:28.428 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:28.428 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:28.685 null2 00:28:28.685 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:28.685 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:28.685 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:28.943 null3 00:28:28.943 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:28.943 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:28.943 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:28.943 null4 00:28:29.201 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:29.201 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.201 15:09:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:29.201 null5 00:28:29.201 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:29.201 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.201 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:29.458 null6 00:28:29.458 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:29.458 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.458 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:29.716 null7 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
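The @58 through @64 entries around this point show the parallel phase of ns_hotplug_stress.sh being set up: eight null bdevs, null0 through null7, each 100 MiB with a 4096-byte block size, are created with bdev_null_create, and one background add_remove worker is started per bdev with its PID collected for the later wait. A rough bash sketch of that setup, reconstructed from the traced script line numbers rather than copied from the SPDK source, with $rpc_py standing in for the full rpc.py path shown in the log and add_remove sketched after the @66 wait entry below:

    # Illustrative reconstruction of the traced setup phase (names assumed, not verbatim SPDK code)
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # @60 in the trace: create a 100 MiB null bdev with a 4096-byte block size
        $rpc_py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do
        # @63-@64: launch one add/remove worker per bdev and remember its PID
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    # @66: wait for all eight workers to finish
    wait "${pids[@]}"
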
00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3286906 3286907 3286909 3286912 3286913 3286915 3286917 3286918 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.717 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:29.975 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:29.975 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
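Each worker launched above runs the add_remove helper: the @14 through @18 entries show it binding one namespace ID to one null bdev and then repeatedly attaching and detaching that namespace on nqn.2016-06.io.spdk:cnode1 for ten iterations. Because eight workers run concurrently, their @16/@17/@18 entries interleave in the traces that follow, which is why add and remove calls for different NSIDs appear out of order. A minimal sketch of the loop, inferred from the traced line numbers and not taken verbatim from the SPDK script ($rpc_py again stands for the rpc.py path shown in the log):

    # Illustrative reconstruction of the traced add_remove worker (assumed helper shape)
    add_remove() {
        # @14 in the trace: namespace ID and backing bdev for this worker
        local nsid=$1 bdev=$2
        # @16-@18: ten add/remove cycles against the target subsystem
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
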
00:28:29.975 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:29.975 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:29.975 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:29.975 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:29.975 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:29.975 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:29.975 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.975 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.975 15:09:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:29.975 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.975 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.975 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:29.975 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.975 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.975 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:29.975 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.975 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.975 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.975 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:29.975 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.975 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:30.232 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.232 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.233 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:30.233 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.233 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.233 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:30.233 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.233 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.233 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:30.233 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:30.233 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:30.233 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:30.233 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:30.233 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:30.233 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:30.233 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:30.233 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:30.490 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.490 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.490 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:30.490 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.490 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.490 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:30.490 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.491 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.491 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:30.491 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.491 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.491 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:30.491 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.491 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.491 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:30.491 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.491 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.491 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.491 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.491 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:30.491 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:30.491 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.491 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.491 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:30.748 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:30.748 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:30.748 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:30.748 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:30.748 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:30.748 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:30.748 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:30.748 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.006 15:09:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:31.006 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:31.006 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.006 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:31.006 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:31.006 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.264 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:31.522 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:31.522 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.522 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:31.522 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:31.522 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:31.522 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:31.522 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:31.522 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:31.779 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.779 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.779 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:31.779 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:31.780 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:32.037 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:32.037 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:32.037 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:32.037 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.037 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:32.037 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:32.037 15:09:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:32.037 15:09:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.295 15:09:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:32.295 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:28:32.551 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:32.808 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:32.808 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:32.808 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:32.808 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:32.808 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:32.808 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:32.808 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:32.808 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
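The passes traced above (and the ones that follow) are the body of the namespace hotplug stress loop in ns_hotplug_stress.sh lines 16-18: each iteration attaches the null bdevs null0..null7 to nqn.2016-06.io.spdk:cnode1 as namespaces 1..8 in a shuffled order, then detaches them again, for ten iterations. A minimal sketch of that loop, reconstructed only from the RPC calls visible in the trace (the shuffled ordering and the count of 10 come from the trace; the exact script body may differ):

  # sketch reconstructed from the trace; not the verbatim SPDK script
  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  i=0
  while (( i < 10 )); do
      # attach null0..null7 as namespaces 1..8 in a random order
      for n in $(shuf -e 1 2 3 4 5 6 7 8); do
          $rpc_py nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
      done
      # detach them again, also in a random order
      for n in $(shuf -e 1 2 3 4 5 6 7 8); do
          $rpc_py nvmf_subsystem_remove_ns "$nqn" "$n"
      done
      (( ++i ))
  done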
00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.066 15:09:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.324 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:33.581 15:09:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:33.581 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:33.839 rmmod nvme_tcp 00:28:33.839 rmmod nvme_fabrics 00:28:33.839 rmmod nvme_keyring 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3280789 ']' 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3280789 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3280789 ']' 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3280789 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:33.839 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3280789 00:28:34.099 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:34.099 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:34.099 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3280789' 00:28:34.099 killing process with pid 3280789 00:28:34.099 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3280789 00:28:34.099 15:09:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3280789 00:28:34.099 15:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:34.099 15:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:34.099 15:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:34.099 15:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:28:34.099 15:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:34.099 15:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:28:34.099 15:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:28:34.099 15:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:34.099 15:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:28:34.099 15:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.099 15:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.099 15:09:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:36.634 00:28:36.634 real 0m47.952s 00:28:36.634 user 3m0.261s 00:28:36.634 sys 0m19.756s 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:36.634 ************************************ 00:28:36.634 END TEST nvmf_ns_hotplug_stress 00:28:36.634 ************************************ 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:36.634 ************************************ 00:28:36.634 START TEST nvmf_delete_subsystem 00:28:36.634 ************************************ 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:36.634 * Looking for test storage... 
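The teardown traced just before the END TEST marker (nvmftestfini) unloads the nvme-tcp and nvme-fabrics kernel modules, kills the nvmf_tgt process (pid 3280789 here), strips the SPDK-tagged iptables rules, flushes the test interfaces and removes the per-test network namespace. A rough equivalent of those steps, with the retry loops and error handling of test/nvmf/common.sh omitted:

  # sketch of the teardown steps visible in the trace
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"                     # target launched earlier by nvmfappstart
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop SPDK-tagged firewall rules
  ip -4 addr flush cvl_0_1
  ip netns delete cvl_0_0_ns_spdk                        # _remove_spdk_ns in the trace; assumed to delete the per-test namespace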
00:28:36.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:36.634 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:36.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.635 --rc genhtml_branch_coverage=1 00:28:36.635 --rc genhtml_function_coverage=1 00:28:36.635 --rc genhtml_legend=1 00:28:36.635 --rc geninfo_all_blocks=1 00:28:36.635 --rc geninfo_unexecuted_blocks=1 00:28:36.635 00:28:36.635 ' 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:36.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.635 --rc genhtml_branch_coverage=1 00:28:36.635 --rc genhtml_function_coverage=1 00:28:36.635 --rc genhtml_legend=1 00:28:36.635 --rc geninfo_all_blocks=1 00:28:36.635 --rc geninfo_unexecuted_blocks=1 00:28:36.635 00:28:36.635 ' 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:36.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.635 --rc genhtml_branch_coverage=1 00:28:36.635 --rc genhtml_function_coverage=1 00:28:36.635 --rc genhtml_legend=1 00:28:36.635 --rc geninfo_all_blocks=1 00:28:36.635 --rc geninfo_unexecuted_blocks=1 00:28:36.635 00:28:36.635 ' 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:36.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.635 --rc genhtml_branch_coverage=1 00:28:36.635 --rc genhtml_function_coverage=1 00:28:36.635 --rc 
genhtml_legend=1 00:28:36.635 --rc geninfo_all_blocks=1 00:28:36.635 --rc geninfo_unexecuted_blocks=1 00:28:36.635 00:28:36.635 ' 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:36.635 15:09:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:36.635 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:28:36.636 15:09:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:43.206 15:09:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:43.206 15:09:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:43.206 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:43.206 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.206 15:09:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:43.206 Found net devices under 0000:86:00.0: cvl_0_0 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.206 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:43.207 Found net devices under 0000:86:00.1: cvl_0_1 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:43.207 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:43.207 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:28:43.207 00:28:43.207 --- 10.0.0.2 ping statistics --- 00:28:43.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.207 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:43.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:28:43.207 00:28:43.207 --- 10.0.0.1 ping statistics --- 00:28:43.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.207 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3291280 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3291280 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3291280 ']' 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
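The target for the delete_subsystem test is started by nvmfappstart inside the cvl_0_0_ns_spdk namespace that was just plumbed, in interrupt mode and pinned to cores 0-1 (-m 0x3), and the harness then blocks until the RPC socket answers. A sketch of that launch using only the command line visible in the trace; waitforlisten's polling is simplified here to a single rpc.py probe:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk

  ip netns exec cvl_0_0_ns_spdk \
      "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!

  # simplified stand-in for waitforlisten: poll until /var/tmp/spdk.sock accepts RPCs
  until "$spdk/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done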
00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.207 [2024-12-11 15:09:35.431760] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:43.207 [2024-12-11 15:09:35.432780] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:28:43.207 [2024-12-11 15:09:35.432821] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.207 [2024-12-11 15:09:35.512179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:43.207 [2024-12-11 15:09:35.553397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.207 [2024-12-11 15:09:35.553432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.207 [2024-12-11 15:09:35.553439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.207 [2024-12-11 15:09:35.553446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.207 [2024-12-11 15:09:35.553451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:43.207 [2024-12-11 15:09:35.554548] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.207 [2024-12-11 15:09:35.554551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.207 [2024-12-11 15:09:35.623512] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:43.207 [2024-12-11 15:09:35.624111] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:43.207 [2024-12-11 15:09:35.624336] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
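[Annotation] At this point nvmf_tgt has been launched inside the namespace with a two-core mask (-m 0x3) and --interrupt-mode; the NOTICE lines show the DPDK EAL coming up, reactors starting on cores 0 and 1, and each spdk_thread being switched to interrupt mode, after which waitforlisten blocks until the /var/tmp/spdk.sock RPC socket answers. A rough equivalent of that start-up, with the Jenkins workspace paths shortened to a relative SPDK tree and the readiness check reduced to a simple rpc.py poll (a simplification of what waitforlisten actually does):

  # start the target inside the namespace: instance 0, full trace mask, 2 cores, interrupt mode
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # poll the default RPC socket until the app is ready to serve RPCs
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done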
00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.207 [2024-12-11 15:09:35.691353] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.207 [2024-12-11 15:09:35.719636] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.207 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.208 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:28:43.208 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.208 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.208 NULL1 00:28:43.208 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.208 15:09:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:43.208 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.208 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.208 Delay0 00:28:43.208 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.208 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:43.208 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.208 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:43.208 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.208 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3291321 00:28:43.208 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:28:43.208 15:09:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:43.208 [2024-12-11 15:09:35.832435] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
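[Annotation] The RPC sequence above is the core of the delete_subsystem test: a TCP transport is created, subsystem nqn.2016-06.io.spdk:cnode1 gets a listener on 10.0.0.2:4420, a 1000 MiB null bdev is wrapped in a delay bdev (Delay0, roughly one second of injected latency per operation) and attached as a namespace, and spdk_nvme_perf is then pointed at it while the subsystem is deleted out from under the in-flight I/O. A condensed sketch of the same flow driven through scripts/rpc.py directly; the rpc_cmd helper in the harness issues the same RPC calls, paths are shortened here, and the kill -0 polling loop is a simplified form of the delay loop in delete_subsystem.sh:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # drive random 70/30 I/O from two cores, then pull the subsystem away mid-run
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # perf is expected to exit on its own once its queues are torn down
  while kill -0 "$perf_pid" 2>/dev/null; do sleep 0.5; done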
00:28:45.107 15:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:45.107 15:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.107 15:09:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 starting I/O failed: -6 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Write completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 starting I/O failed: -6 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 starting I/O failed: -6 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 starting I/O failed: -6 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Write completed with error (sct=0, sc=8) 00:28:45.107 starting I/O failed: -6 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Write completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Write completed with error (sct=0, sc=8) 00:28:45.107 starting I/O failed: -6 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 starting I/O failed: -6 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 starting I/O failed: -6 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 starting I/O failed: -6 00:28:45.107 Write completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Write completed with error (sct=0, sc=8) 00:28:45.107 starting I/O failed: -6 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Write completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 starting I/O failed: -6 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 Read completed with error (sct=0, sc=8) 00:28:45.107 [2024-12-11 15:09:37.916918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248c4a0 is same with the state(6) to be set 00:28:45.107 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with 
error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 starting I/O failed: -6 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 starting I/O failed: -6 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 
00:28:45.108 starting I/O failed: -6 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 starting I/O failed: -6 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 starting I/O failed: -6 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 starting I/O failed: -6 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 starting I/O failed: -6 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 starting I/O failed: -6 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 starting I/O failed: -6 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 starting I/O failed: -6 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 starting I/O failed: -6 00:28:45.108 [2024-12-11 15:09:37.919422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8e80000c80 is same with the state(6) to be set 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 
00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:45.108 Write completed with error (sct=0, sc=8) 00:28:45.108 Read completed with error (sct=0, sc=8) 00:28:46.042 [2024-12-11 15:09:38.887035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248d9b0 is same with the state(6) to be set 00:28:46.042 Write completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Write completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Write completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Write completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.042 [2024-12-11 15:09:38.920040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248c2c0 is same with the state(6) to be set 00:28:46.042 Read completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Write 
completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 [2024-12-11 15:09:38.920756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248c860 is same with the state(6) to be set 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 [2024-12-11 15:09:38.922111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8e8000d6c0 is same with the state(6) to be set 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Write completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Read completed with error (sct=0, sc=8) 00:28:46.043 Write completed 
with error (sct=0, sc=8) 00:28:46.043 [2024-12-11 15:09:38.923064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8e8000d060 is same with the state(6) to be set 00:28:46.043 Initializing NVMe Controllers 00:28:46.043 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:46.043 Controller IO queue size 128, less than required. 00:28:46.043 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:46.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:46.043 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:46.043 Initialization complete. Launching workers. 00:28:46.043 ======================================================== 00:28:46.043 Latency(us) 00:28:46.043 Device Information : IOPS MiB/s Average min max 00:28:46.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.59 0.08 895302.38 297.57 1008534.10 00:28:46.043 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.11 0.08 902732.35 266.23 1011326.51 00:28:46.043 ======================================================== 00:28:46.043 Total : 335.70 0.16 898978.84 266.23 1011326.51 00:28:46.043 00:28:46.043 [2024-12-11 15:09:38.923674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248d9b0 (9): Bad file descriptor 00:28:46.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:46.043 15:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.043 15:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:28:46.043 15:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3291321 00:28:46.043 15:09:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3291321 00:28:46.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3291321) - No such process 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3291321 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3291321 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3291321 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:46.610 [2024-12-11 15:09:39.455637] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3291985 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3291985 00:28:46.610 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:46.610 [2024-12-11 15:09:39.541690] 
subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:28:47.174 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:47.174 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3291985 00:28:47.174 15:09:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:47.738 15:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:47.738 15:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3291985 00:28:47.738 15:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:47.995 15:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:47.995 15:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3291985 00:28:47.995 15:09:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:48.558 15:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:48.558 15:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3291985 00:28:48.558 15:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:49.122 15:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:49.122 15:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3291985 00:28:49.122 15:09:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:49.686 15:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:49.686 15:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3291985 00:28:49.686 15:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:28:49.943 Initializing NVMe Controllers 00:28:49.943 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:49.943 Controller IO queue size 128, less than required. 00:28:49.943 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:49.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:49.943 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:49.943 Initialization complete. Launching workers. 
00:28:49.943 ======================================================== 00:28:49.943 Latency(us) 00:28:49.943 Device Information : IOPS MiB/s Average min max 00:28:49.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003560.06 1000126.30 1043371.45 00:28:49.943 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004090.03 1000171.82 1011157.94 00:28:49.943 ======================================================== 00:28:49.943 Total : 256.00 0.12 1003825.04 1000126.30 1043371.45 00:28:49.943 00:28:50.202 15:09:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3291985 00:28:50.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3291985) - No such process 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3291985 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:50.202 rmmod nvme_tcp 00:28:50.202 rmmod nvme_fabrics 00:28:50.202 rmmod nvme_keyring 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3291280 ']' 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3291280 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3291280 ']' 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3291280 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3291280 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3291280' 00:28:50.202 killing process with pid 3291280 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3291280 00:28:50.202 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3291280 00:28:50.462 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:50.462 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:50.462 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:50.462 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:28:50.462 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:28:50.462 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:50.462 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:28:50.462 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:50.462 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:50.462 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.462 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.462 15:09:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.366 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:52.366 00:28:52.366 real 0m16.105s 00:28:52.366 user 0m25.999s 00:28:52.366 sys 0m6.098s 00:28:52.366 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:52.366 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:52.366 ************************************ 00:28:52.366 END TEST nvmf_delete_subsystem 00:28:52.366 ************************************ 00:28:52.366 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:52.366 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:52.366 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:52.366 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:52.626 ************************************ 00:28:52.626 START TEST nvmf_host_management 00:28:52.626 ************************************ 00:28:52.626 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:52.626 * Looking for test storage... 00:28:52.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:28:52.626 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:52.626 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:52.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.627 --rc genhtml_branch_coverage=1 00:28:52.627 --rc genhtml_function_coverage=1 00:28:52.627 --rc genhtml_legend=1 00:28:52.627 --rc geninfo_all_blocks=1 00:28:52.627 --rc geninfo_unexecuted_blocks=1 00:28:52.627 00:28:52.627 ' 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:52.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.627 --rc genhtml_branch_coverage=1 00:28:52.627 --rc genhtml_function_coverage=1 00:28:52.627 --rc genhtml_legend=1 00:28:52.627 --rc geninfo_all_blocks=1 00:28:52.627 --rc geninfo_unexecuted_blocks=1 00:28:52.627 00:28:52.627 ' 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:52.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.627 --rc genhtml_branch_coverage=1 00:28:52.627 --rc genhtml_function_coverage=1 00:28:52.627 --rc genhtml_legend=1 00:28:52.627 --rc geninfo_all_blocks=1 00:28:52.627 --rc geninfo_unexecuted_blocks=1 00:28:52.627 00:28:52.627 ' 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:52.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:52.627 --rc genhtml_branch_coverage=1 00:28:52.627 --rc genhtml_function_coverage=1 00:28:52.627 --rc genhtml_legend=1 
00:28:52.627 --rc geninfo_all_blocks=1 00:28:52.627 --rc geninfo_unexecuted_blocks=1 00:28:52.627 00:28:52.627 ' 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:52.627 15:09:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:52.627 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:28:52.628 15:09:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.201 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.201 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:28:59.201 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:59.201 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:59.201 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:59.201 15:09:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:59.201 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:59.201 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:28:59.201 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:59.201 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:28:59.201 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:28:59.201 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:28:59.201 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:28:59.201 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:28:59.201 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:28:59.201 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.201 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.201 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:59.202 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:59.202 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:59.202 Found net devices under 0000:86:00.0: cvl_0_0 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:59.202 Found net devices under 0000:86:00.1: cvl_0_1 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:59.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:28:59.202 00:28:59.202 --- 10.0.0.2 ping statistics --- 00:28:59.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.202 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:59.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:59.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:28:59.202 00:28:59.202 --- 10.0.0.1 ping statistics --- 00:28:59.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.202 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:59.202 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3295977 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3295977 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3295977 ']' 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:59.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.203 [2024-12-11 15:09:51.619642] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:59.203 [2024-12-11 15:09:51.620565] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:28:59.203 [2024-12-11 15:09:51.620598] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.203 [2024-12-11 15:09:51.698410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:59.203 [2024-12-11 15:09:51.742665] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.203 [2024-12-11 15:09:51.742697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.203 [2024-12-11 15:09:51.742705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.203 [2024-12-11 15:09:51.742711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.203 [2024-12-11 15:09:51.742716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:59.203 [2024-12-11 15:09:51.744206] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.203 [2024-12-11 15:09:51.744312] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:59.203 [2024-12-11 15:09:51.744343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.203 [2024-12-11 15:09:51.744344] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:59.203 [2024-12-11 15:09:51.813146] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:59.203 [2024-12-11 15:09:51.814187] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:59.203 [2024-12-11 15:09:51.814342] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:59.203 [2024-12-11 15:09:51.814663] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:59.203 [2024-12-11 15:09:51.814724] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.203 [2024-12-11 15:09:51.881255] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.203 Malloc0 00:28:59.203 [2024-12-11 15:09:51.969416] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:59.203 15:09:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.203 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3296164 00:28:59.203 15:09:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3296164 /var/tmp/bdevperf.sock 00:28:59.203 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3296164 ']' 00:28:59.203 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:59.203 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:59.203 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:59.203 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.203 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:28:59.203 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:59.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:59.203 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.203 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:28:59.203 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.203 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.203 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.203 { 00:28:59.203 "params": { 00:28:59.203 "name": "Nvme$subsystem", 00:28:59.203 "trtype": "$TEST_TRANSPORT", 00:28:59.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.203 "adrfam": "ipv4", 00:28:59.203 "trsvcid": "$NVMF_PORT", 00:28:59.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.203 "hdgst": ${hdgst:-false}, 00:28:59.203 "ddgst": ${ddgst:-false} 00:28:59.203 }, 00:28:59.203 "method": "bdev_nvme_attach_controller" 00:28:59.203 } 00:28:59.203 EOF 00:28:59.203 )") 00:28:59.203 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:28:59.203 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:28:59.203 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:28:59.203 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:59.203 "params": { 00:28:59.203 "name": "Nvme0", 00:28:59.203 "trtype": "tcp", 00:28:59.203 "traddr": "10.0.0.2", 00:28:59.203 "adrfam": "ipv4", 00:28:59.203 "trsvcid": "4420", 00:28:59.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:59.203 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:59.203 "hdgst": false, 00:28:59.203 "ddgst": false 00:28:59.203 }, 00:28:59.203 "method": "bdev_nvme_attach_controller" 00:28:59.203 }' 00:28:59.203 [2024-12-11 15:09:52.062966] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:28:59.203 [2024-12-11 15:09:52.063019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296164 ] 00:28:59.203 [2024-12-11 15:09:52.140541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.203 [2024-12-11 15:09:52.181011] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.461 Running I/O for 10 seconds... 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.461 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.718 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=87 00:28:59.718 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 87 -ge 100 ']' 00:28:59.718 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:59.976 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:59.976 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:59.976 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:59.976 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:59.976 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.976 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.976 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.976 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=654 00:28:59.976 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 654 -ge 100 ']' 00:28:59.976 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:59.976 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:59.976 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:59.976 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:59.976 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.976 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.976 [2024-12-11 15:09:52.821969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.976 [2024-12-11 15:09:52.822009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.976 [2024-12-11 15:09:52.822026] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.976 [2024-12-11 15:09:52.822034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.976 [2024-12-11 15:09:52.822044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.976 [2024-12-11 15:09:52.822051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.976 [2024-12-11 15:09:52.822059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.976 [2024-12-11 15:09:52.822065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.976 [2024-12-11 15:09:52.822073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.976 [2024-12-11 15:09:52.822080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.976 [2024-12-11 15:09:52.822088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.976 [2024-12-11 15:09:52.822095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.976 [2024-12-11 15:09:52.822103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.976 [2024-12-11 15:09:52.822110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822182] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822636] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.977 [2024-12-11 15:09:52.822717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.977 [2024-12-11 15:09:52.822724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.978 [2024-12-11 15:09:52.822731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.822739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.978 [2024-12-11 15:09:52.822746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.822756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.978 [2024-12-11 15:09:52.822763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.822771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.978 [2024-12-11 15:09:52.822777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.822786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.978 [2024-12-11 15:09:52.822792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.822800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.978 [2024-12-11 15:09:52.822806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.822814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.978 [2024-12-11 15:09:52.822821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.822829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.978 [2024-12-11 15:09:52.822835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.822843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.978 [2024-12-11 15:09:52.822850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.822858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.978 [2024-12-11 15:09:52.822865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.822874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.978 [2024-12-11 15:09:52.822880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.822889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.978 [2024-12-11 15:09:52.822897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.822906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.978 [2024-12-11 15:09:52.822912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.822920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.978 [2024-12-11 15:09:52.822927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.822935] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.978 [2024-12-11 15:09:52.822944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.822952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.978 [2024-12-11 15:09:52.822958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.822966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.978 [2024-12-11 15:09:52.822972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.822997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:59.978 [2024-12-11 15:09:52.823937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:59.978 task offset: 98304 on job bdev=Nvme0n1 fails 00:28:59.978 00:28:59.978 Latency(us) 00:28:59.978 [2024-12-11T14:09:53.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.978 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:59.978 Job: Nvme0n1 ended in about 0.40 seconds with error 00:28:59.978 Verification LBA range: start 0x0 length 0x400 00:28:59.978 Nvme0n1 : 0.40 1902.18 118.89 158.52 0.00 30214.31 1602.78 27696.08 00:28:59.978 [2024-12-11T14:09:53.026Z] =================================================================================================================== 00:28:59.978 [2024-12-11T14:09:53.026Z] Total : 1902.18 118.89 158.52 0.00 30214.31 1602.78 27696.08 00:28:59.978 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.978 [2024-12-11 15:09:52.826354] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:59.978 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:59.978 [2024-12-11 15:09:52.826375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f591a0 (9): Bad file descriptor 00:28:59.978 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.978 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:59.978 [2024-12-11 15:09:52.827335] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:28:59.978 [2024-12-11 15:09:52.827409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:59.978 [2024-12-11 15:09:52.827431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:59.978 [2024-12-11 15:09:52.827446] nvme_fabric.c: 
599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:28:59.978 [2024-12-11 15:09:52.827454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:28:59.978 [2024-12-11 15:09:52.827462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:59.978 [2024-12-11 15:09:52.827468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1f591a0 00:28:59.978 [2024-12-11 15:09:52.827487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f591a0 (9): Bad file descriptor 00:28:59.978 [2024-12-11 15:09:52.827499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:59.978 [2024-12-11 15:09:52.827510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:59.978 [2024-12-11 15:09:52.827518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:59.978 [2024-12-11 15:09:52.827526] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:59.978 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.978 15:09:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:00.909 15:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3296164 00:29:00.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3296164) - No such process 00:29:00.909 15:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:00.909 15:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:00.909 15:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:00.909 15:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:00.909 15:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:00.909 15:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:00.909 15:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:00.909 15:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:00.909 { 00:29:00.909 "params": { 00:29:00.909 "name": "Nvme$subsystem", 00:29:00.909 "trtype": "$TEST_TRANSPORT", 00:29:00.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.909 "adrfam": "ipv4", 00:29:00.909 "trsvcid": "$NVMF_PORT", 00:29:00.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.909 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:29:00.909 "hdgst": ${hdgst:-false}, 00:29:00.909 "ddgst": ${ddgst:-false} 00:29:00.909 }, 00:29:00.909 "method": "bdev_nvme_attach_controller" 00:29:00.909 } 00:29:00.909 EOF 00:29:00.909 )") 00:29:00.909 15:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:00.909 15:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:00.909 15:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:00.909 15:09:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:00.909 "params": { 00:29:00.909 "name": "Nvme0", 00:29:00.909 "trtype": "tcp", 00:29:00.909 "traddr": "10.0.0.2", 00:29:00.909 "adrfam": "ipv4", 00:29:00.909 "trsvcid": "4420", 00:29:00.909 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:00.909 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:00.909 "hdgst": false, 00:29:00.909 "ddgst": false 00:29:00.909 }, 00:29:00.909 "method": "bdev_nvme_attach_controller" 00:29:00.909 }' 00:29:00.909 [2024-12-11 15:09:53.891473] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:29:00.909 [2024-12-11 15:09:53.891524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296490 ] 00:29:01.166 [2024-12-11 15:09:53.968929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.166 [2024-12-11 15:09:54.007584] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.166 Running I/O for 1 seconds... 
00:29:02.535 1984.00 IOPS, 124.00 MiB/s 00:29:02.535 Latency(us) 00:29:02.535 [2024-12-11T14:09:55.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:02.535 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:02.535 Verification LBA range: start 0x0 length 0x400 00:29:02.535 Nvme0n1 : 1.02 2004.34 125.27 0.00 0.00 31427.04 4673.00 27924.03 00:29:02.535 [2024-12-11T14:09:55.583Z] =================================================================================================================== 00:29:02.535 [2024-12-11T14:09:55.583Z] Total : 2004.34 125.27 0.00 0.00 31427.04 4673.00 27924.03 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:02.535 rmmod nvme_tcp 00:29:02.535 rmmod nvme_fabrics 00:29:02.535 rmmod nvme_keyring 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3295977 ']' 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3295977 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3295977 ']' 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3295977 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.535 15:09:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3295977 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:02.535 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3295977' 00:29:02.535 killing process with pid 3295977 00:29:02.536 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3295977 00:29:02.536 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3295977 00:29:02.794 [2024-12-11 15:09:55.680396] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:02.794 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:02.794 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:02.794 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:02.794 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:02.794 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:02.794 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:02.794 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:02.794 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:02.794 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:02.794 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.794 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:02.794 15:09:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:04.785 15:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:04.785 15:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:04.785 00:29:04.785 real 0m12.354s 00:29:04.785 user 0m17.870s 00:29:04.785 sys 0m6.335s 00:29:04.785 15:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:04.785 15:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:04.785 ************************************ 00:29:04.785 END TEST nvmf_host_management 00:29:04.785 ************************************ 00:29:04.785 15:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test 
nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:04.785 15:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:04.785 15:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.785 15:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:05.045 ************************************ 00:29:05.045 START TEST nvmf_lvol 00:29:05.045 ************************************ 00:29:05.045 15:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:05.045 * Looking for test storage... 00:29:05.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:29:05.045 15:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:05.045 15:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:29:05.045 15:09:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:05.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.045 --rc genhtml_branch_coverage=1 00:29:05.045 --rc genhtml_function_coverage=1 00:29:05.045 --rc genhtml_legend=1 00:29:05.045 --rc geninfo_all_blocks=1 00:29:05.045 --rc geninfo_unexecuted_blocks=1 00:29:05.045 00:29:05.045 ' 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:05.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.045 --rc genhtml_branch_coverage=1 00:29:05.045 --rc genhtml_function_coverage=1 00:29:05.045 --rc genhtml_legend=1 00:29:05.045 --rc geninfo_all_blocks=1 00:29:05.045 --rc geninfo_unexecuted_blocks=1 00:29:05.045 00:29:05.045 ' 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:05.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.045 --rc genhtml_branch_coverage=1 00:29:05.045 --rc genhtml_function_coverage=1 00:29:05.045 --rc genhtml_legend=1 00:29:05.045 --rc geninfo_all_blocks=1 00:29:05.045 --rc geninfo_unexecuted_blocks=1 00:29:05.045 00:29:05.045 ' 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:05.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:05.045 --rc genhtml_branch_coverage=1 00:29:05.045 --rc genhtml_function_coverage=1 00:29:05.045 --rc genhtml_legend=1 00:29:05.045 --rc geninfo_all_blocks=1 00:29:05.045 --rc geninfo_unexecuted_blocks=1 00:29:05.045 00:29:05.045 ' 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:05.045 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.046 15:09:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:05.046 15:09:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:11.618 15:10:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:11.618 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:11.618 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.618 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:11.618 Found net devices under 0000:86:00.0: cvl_0_0 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:11.619 Found net devices under 0000:86:00.1: cvl_0_1 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:11.619 
15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:11.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:29:11.619 00:29:11.619 --- 10.0.0.2 ping statistics --- 00:29:11.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.619 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:11.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:11.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:29:11.619 00:29:11.619 --- 10.0.0.1 ping statistics --- 00:29:11.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.619 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3300244 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3300244 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3300244 ']' 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.619 15:10:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:11.619 [2024-12-11 15:10:03.988152] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:29:11.619 [2024-12-11 15:10:03.989184] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:29:11.619 [2024-12-11 15:10:03.989224] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.619 [2024-12-11 15:10:04.068004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:11.619 [2024-12-11 15:10:04.108395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.619 [2024-12-11 15:10:04.108432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.619 [2024-12-11 15:10:04.108438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.619 [2024-12-11 15:10:04.108448] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.619 [2024-12-11 15:10:04.108453] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:11.619 [2024-12-11 15:10:04.109694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.619 [2024-12-11 15:10:04.109803] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.619 [2024-12-11 15:10:04.109804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.619 [2024-12-11 15:10:04.177759] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:11.619 [2024-12-11 15:10:04.178667] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:11.619 [2024-12-11 15:10:04.179059] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:11.619 [2024-12-11 15:10:04.179148] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:29:11.619 15:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.619 15:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:11.619 15:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:11.619 15:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:11.619 15:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:11.619 15:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.619 15:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:11.619 [2024-12-11 15:10:04.426656] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:11.619 15:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:11.879 15:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:11.879 15:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:11.879 15:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:11.879 15:10:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:12.137 15:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:12.399 15:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d142711e-9884-4f77-bbdb-0e17d5c53d80 00:29:12.399 15:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u d142711e-9884-4f77-bbdb-0e17d5c53d80 lvol 20 00:29:12.658 15:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a5c8d3b8-45e6-4d67-a3f8-189223133efd 00:29:12.658 15:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:12.916 15:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a5c8d3b8-45e6-4d67-a3f8-189223133efd 00:29:12.916 15:10:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:13.175 [2024-12-11 15:10:06.058510] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:29:13.175 15:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:13.433 15:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3300529 00:29:13.433 15:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:13.433 15:10:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:14.366 15:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_snapshot a5c8d3b8-45e6-4d67-a3f8-189223133efd MY_SNAPSHOT 00:29:14.623 15:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=081d453c-8418-4619-8c6a-ec8bcd951168 00:29:14.623 15:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_resize a5c8d3b8-45e6-4d67-a3f8-189223133efd 30 00:29:14.881 15:10:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_clone 081d453c-8418-4619-8c6a-ec8bcd951168 MY_CLONE 00:29:15.139 15:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2243f13b-ea28-4f94-9577-b59db7f90d45 00:29:15.139 15:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_inflate 2243f13b-ea28-4f94-9577-b59db7f90d45 00:29:15.703 15:10:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3300529 00:29:23.801 Initializing NVMe Controllers 00:29:23.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:23.801 Controller IO queue size 128, less than required. 00:29:23.801 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:23.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:23.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:23.801 Initialization complete. Launching workers. 
00:29:23.801 ======================================================== 00:29:23.801 Latency(us) 00:29:23.801 Device Information : IOPS MiB/s Average min max 00:29:23.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12213.60 47.71 10484.41 1508.85 69386.05 00:29:23.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12099.40 47.26 10584.20 3087.47 57695.03 00:29:23.801 ======================================================== 00:29:23.801 Total : 24313.00 94.97 10534.07 1508.85 69386.05 00:29:23.801 00:29:23.801 15:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:23.801 15:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete a5c8d3b8-45e6-4d67-a3f8-189223133efd 00:29:24.059 15:10:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d142711e-9884-4f77-bbdb-0e17d5c53d80 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:24.318 rmmod nvme_tcp 00:29:24.318 rmmod nvme_fabrics 00:29:24.318 rmmod nvme_keyring 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3300244 ']' 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3300244 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3300244 ']' 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3300244 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3300244 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3300244' 00:29:24.318 killing process with pid 3300244 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3300244 00:29:24.318 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3300244 00:29:24.577 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:24.577 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:24.577 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:24.577 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:29:24.577 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:29:24.577 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:24.577 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:29:24.577 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:24.577 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:24.577 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.577 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:24.577 15:10:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.482 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:26.741 00:29:26.741 real 0m21.680s 00:29:26.741 user 0m55.230s 00:29:26.741 sys 0m9.771s 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:26.741 ************************************ 00:29:26.741 END TEST nvmf_lvol 00:29:26.741 ************************************ 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:26.741 ************************************ 00:29:26.741 START TEST nvmf_lvs_grow 00:29:26.741 
************************************ 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:26.741 * Looking for test storage... 00:29:26.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:26.741 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:26.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.742 --rc genhtml_branch_coverage=1 00:29:26.742 --rc genhtml_function_coverage=1 00:29:26.742 --rc genhtml_legend=1 00:29:26.742 --rc geninfo_all_blocks=1 00:29:26.742 --rc geninfo_unexecuted_blocks=1 00:29:26.742 00:29:26.742 ' 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:26.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.742 --rc genhtml_branch_coverage=1 00:29:26.742 --rc genhtml_function_coverage=1 00:29:26.742 --rc genhtml_legend=1 00:29:26.742 --rc geninfo_all_blocks=1 00:29:26.742 --rc geninfo_unexecuted_blocks=1 00:29:26.742 00:29:26.742 ' 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:26.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.742 --rc genhtml_branch_coverage=1 00:29:26.742 --rc genhtml_function_coverage=1 00:29:26.742 --rc genhtml_legend=1 00:29:26.742 --rc geninfo_all_blocks=1 00:29:26.742 --rc geninfo_unexecuted_blocks=1 00:29:26.742 00:29:26.742 ' 00:29:26.742 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:26.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.742 --rc genhtml_branch_coverage=1 00:29:26.742 --rc genhtml_function_coverage=1 00:29:26.742 --rc genhtml_legend=1 00:29:26.742 --rc geninfo_all_blocks=1 00:29:26.742 --rc geninfo_unexecuted_blocks=1 00:29:26.742 00:29:26.742 ' 00:29:27.001 15:10:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:27.001 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:29:27.002 15:10:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:32.276 15:10:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
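[editor's note] The long run of array appends just above is nvmf/common.sh building its NIC classification tables (Intel E810 device IDs 0x1592/0x159b, X722 0x37d2, plus the Mellanox ConnectX family) before scanning the bus. A rough standalone equivalent of that scan, reading sysfs directly instead of the script's internal pci_bus_cache, is sketched below purely for illustration; it is not part of SPDK and only covers the E810 case this job cares about (SPDK_TEST_NVMF_NICS=e810).

#!/usr/bin/env bash
# Illustrative sketch only: list Intel E810 PCI functions and their net interfaces,
# roughly what gather_supported_nvmf_pci_devs ends up reporting for this job.
e810_ids=(0x1592 0x159b)                      # device IDs treated as E810 in the trace above
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    [[ $vendor == 0x8086 ]] || continue       # Intel only
    for id in "${e810_ids[@]}"; do
        if [[ $device == "$id" ]]; then
            # e.g. "Found 0000:86:00.0 (0x8086 - 0x159b): cvl_0_0" in this run
            echo "Found ${dev##*/} ($vendor - $device): $(ls "$dev/net" 2>/dev/null)"
        fi
    done
done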
00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:32.276 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:32.276 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:32.276 Found net devices under 0000:86:00.0: cvl_0_0 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:32.276 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:32.277 Found net devices under 0000:86:00.1: cvl_0_1 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:32.277 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:32.535 15:10:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.535 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.535 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.535 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:32.535 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:32.536 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.536 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.536 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.536 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:32.536 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:32.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:29:32.536 00:29:32.536 --- 10.0.0.2 ping statistics --- 00:29:32.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.536 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:29:32.536 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:32.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:29:32.536 00:29:32.536 --- 10.0.0.1 ping statistics --- 00:29:32.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.536 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:29:32.536 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.536 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:29:32.536 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:32.536 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.536 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:32.536 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:32.536 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.536 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:32.536 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:32.795 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:32.795 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:32.795 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:32.795 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:32.795 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3305872 00:29:32.795 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3305872 00:29:32.795 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3305872 ']' 00:29:32.795 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.795 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.795 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:32.795 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.795 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.795 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:32.795 [2024-12-11 15:10:25.635183] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
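[editor's note] For readers tracing the setup, the nvmftestinit/nvmfappstart work captured above condenses to roughly the following; the interface names (cvl_0_0/cvl_0_1), addresses, and workspace path are the ones this run reports, and the real logic lives in test/nvmf/common.sh, so treat this as a sketch rather than a drop-in script.

NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
ip netns add "$NS"                                  # target gets its own network namespace
ip link set cvl_0_0 netns "$NS"                     # target-side port of the E810 pair
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# let NVMe/TCP traffic in, tagged with a comment so cleanup can strip the rule later
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # initiator -> target sanity check
ip netns exec "$NS" ping -c 1 10.0.0.1              # target -> initiator
modprobe nvme-tcp
# start the target in interrupt mode on core 0, then wait for /var/tmp/spdk.sock
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &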
00:29:32.795 [2024-12-11 15:10:25.636067] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:29:32.795 [2024-12-11 15:10:25.636098] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.795 [2024-12-11 15:10:25.714012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.795 [2024-12-11 15:10:25.754501] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:32.795 [2024-12-11 15:10:25.754537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.795 [2024-12-11 15:10:25.754544] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:32.795 [2024-12-11 15:10:25.754551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:32.795 [2024-12-11 15:10:25.754556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:32.795 [2024-12-11 15:10:25.755109] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.795 [2024-12-11 15:10:25.823594] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:32.795 [2024-12-11 15:10:25.823796] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:33.054 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.055 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:33.055 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:33.055 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:33.055 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:33.055 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.055 15:10:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:33.055 [2024-12-11 15:10:26.055778] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.055 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:33.055 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:33.055 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:33.055 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:33.314 ************************************ 00:29:33.314 START TEST lvs_grow_clean 00:29:33.314 ************************************ 00:29:33.314 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:29:33.314 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:33.314 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:33.314 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:33.314 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:33.314 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:33.314 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:33.314 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:29:33.314 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:29:33.314 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:33.314 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:33.314 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:33.573 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=08ce8572-6ebb-4446-ad6a-dbbcc4684bae 00:29:33.573 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08ce8572-6ebb-4446-ad6a-dbbcc4684bae 00:29:33.573 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:33.832 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:33.832 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:33.832 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u 08ce8572-6ebb-4446-ad6a-dbbcc4684bae lvol 150 00:29:34.091 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=dc9f6e02-32eb-4757-8135-cee05af80998 00:29:34.091 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # 
truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:29:34.091 15:10:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:34.091 [2024-12-11 15:10:27.131492] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:34.091 [2024-12-11 15:10:27.131615] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:34.091 true 00:29:34.350 15:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08ce8572-6ebb-4446-ad6a-dbbcc4684bae 00:29:34.350 15:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:34.350 15:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:34.350 15:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:34.608 15:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dc9f6e02-32eb-4757-8135-cee05af80998 00:29:34.866 15:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:34.866 [2024-12-11 15:10:27.891977] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.866 15:10:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:35.125 15:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3306173 00:29:35.125 15:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:35.125 15:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:35.125 15:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3306173 /var/tmp/bdevperf.sock 00:29:35.125 15:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3306173 ']' 00:29:35.125 15:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:35.125 15:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.125 15:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:35.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:35.125 15:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.125 15:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:35.125 [2024-12-11 15:10:28.156707] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:29:35.125 [2024-12-11 15:10:28.156758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3306173 ] 00:29:35.384 [2024-12-11 15:10:28.231587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.384 [2024-12-11 15:10:28.273474] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.384 15:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.384 15:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:35.384 15:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:35.642 Nvme0n1 00:29:35.642 15:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:35.901 [ 00:29:35.901 { 00:29:35.901 "name": "Nvme0n1", 00:29:35.901 "aliases": [ 00:29:35.901 "dc9f6e02-32eb-4757-8135-cee05af80998" 00:29:35.901 ], 00:29:35.901 "product_name": "NVMe disk", 00:29:35.901 "block_size": 4096, 00:29:35.901 "num_blocks": 38912, 00:29:35.901 "uuid": "dc9f6e02-32eb-4757-8135-cee05af80998", 00:29:35.901 "numa_id": 1, 00:29:35.901 "assigned_rate_limits": { 00:29:35.901 "rw_ios_per_sec": 0, 00:29:35.901 "rw_mbytes_per_sec": 0, 00:29:35.901 "r_mbytes_per_sec": 0, 00:29:35.901 "w_mbytes_per_sec": 0 00:29:35.901 }, 00:29:35.901 "claimed": false, 00:29:35.901 "zoned": false, 00:29:35.901 "supported_io_types": { 00:29:35.901 "read": true, 00:29:35.901 "write": true, 00:29:35.901 "unmap": true, 00:29:35.901 "flush": true, 00:29:35.901 "reset": true, 00:29:35.901 "nvme_admin": true, 00:29:35.901 "nvme_io": true, 00:29:35.901 "nvme_io_md": false, 00:29:35.901 "write_zeroes": true, 00:29:35.901 "zcopy": false, 00:29:35.901 "get_zone_info": false, 00:29:35.901 "zone_management": false, 00:29:35.901 "zone_append": false, 00:29:35.901 "compare": true, 00:29:35.901 "compare_and_write": true, 00:29:35.901 "abort": true, 00:29:35.901 "seek_hole": false, 00:29:35.901 
"seek_data": false, 00:29:35.901 "copy": true, 00:29:35.901 "nvme_iov_md": false 00:29:35.901 }, 00:29:35.901 "memory_domains": [ 00:29:35.901 { 00:29:35.901 "dma_device_id": "system", 00:29:35.901 "dma_device_type": 1 00:29:35.901 } 00:29:35.901 ], 00:29:35.901 "driver_specific": { 00:29:35.901 "nvme": [ 00:29:35.901 { 00:29:35.901 "trid": { 00:29:35.901 "trtype": "TCP", 00:29:35.901 "adrfam": "IPv4", 00:29:35.901 "traddr": "10.0.0.2", 00:29:35.901 "trsvcid": "4420", 00:29:35.901 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:35.901 }, 00:29:35.901 "ctrlr_data": { 00:29:35.901 "cntlid": 1, 00:29:35.901 "vendor_id": "0x8086", 00:29:35.901 "model_number": "SPDK bdev Controller", 00:29:35.901 "serial_number": "SPDK0", 00:29:35.901 "firmware_revision": "25.01", 00:29:35.901 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:35.901 "oacs": { 00:29:35.901 "security": 0, 00:29:35.901 "format": 0, 00:29:35.901 "firmware": 0, 00:29:35.901 "ns_manage": 0 00:29:35.901 }, 00:29:35.901 "multi_ctrlr": true, 00:29:35.901 "ana_reporting": false 00:29:35.901 }, 00:29:35.901 "vs": { 00:29:35.901 "nvme_version": "1.3" 00:29:35.901 }, 00:29:35.901 "ns_data": { 00:29:35.901 "id": 1, 00:29:35.901 "can_share": true 00:29:35.901 } 00:29:35.901 } 00:29:35.901 ], 00:29:35.901 "mp_policy": "active_passive" 00:29:35.901 } 00:29:35.901 } 00:29:35.901 ] 00:29:35.901 15:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3306378 00:29:35.901 15:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:35.901 15:10:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:35.901 Running I/O for 10 seconds... 
00:29:37.277 Latency(us) 00:29:37.277 [2024-12-11T14:10:30.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.277 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.277 Nvme0n1 : 1.00 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:29:37.277 [2024-12-11T14:10:30.325Z] =================================================================================================================== 00:29:37.277 [2024-12-11T14:10:30.325Z] Total : 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:29:37.277 00:29:37.844 15:10:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 08ce8572-6ebb-4446-ad6a-dbbcc4684bae 00:29:38.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:38.103 Nvme0n1 : 2.00 22542.50 88.06 0.00 0.00 0.00 0.00 0.00 00:29:38.103 [2024-12-11T14:10:31.151Z] =================================================================================================================== 00:29:38.103 [2024-12-11T14:10:31.151Z] Total : 22542.50 88.06 0.00 0.00 0.00 0.00 0.00 00:29:38.103 00:29:38.103 true 00:29:38.103 15:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08ce8572-6ebb-4446-ad6a-dbbcc4684bae 00:29:38.103 15:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:38.361 15:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:38.361 15:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:38.361 15:10:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3306378 00:29:38.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:38.928 Nvme0n1 : 3.00 22648.33 88.47 0.00 0.00 0.00 0.00 0.00 00:29:38.928 [2024-12-11T14:10:31.976Z] =================================================================================================================== 00:29:38.928 [2024-12-11T14:10:31.976Z] Total : 22648.33 88.47 0.00 0.00 0.00 0.00 0.00 00:29:38.928 00:29:39.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:39.863 Nvme0n1 : 4.00 22764.75 88.92 0.00 0.00 0.00 0.00 0.00 00:29:39.863 [2024-12-11T14:10:32.911Z] =================================================================================================================== 00:29:39.863 [2024-12-11T14:10:32.911Z] Total : 22764.75 88.92 0.00 0.00 0.00 0.00 0.00 00:29:39.863 00:29:41.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:41.238 Nvme0n1 : 5.00 22834.60 89.20 0.00 0.00 0.00 0.00 0.00 00:29:41.238 [2024-12-11T14:10:34.286Z] =================================================================================================================== 00:29:41.238 [2024-12-11T14:10:34.286Z] Total : 22834.60 89.20 0.00 0.00 0.00 0.00 0.00 00:29:41.238 00:29:42.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:42.175 Nvme0n1 : 6.00 22881.17 89.38 0.00 0.00 0.00 0.00 0.00 00:29:42.175 [2024-12-11T14:10:35.223Z] 
=================================================================================================================== 00:29:42.175 [2024-12-11T14:10:35.223Z] Total : 22881.17 89.38 0.00 0.00 0.00 0.00 0.00 00:29:42.175 00:29:43.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:43.113 Nvme0n1 : 7.00 22914.43 89.51 0.00 0.00 0.00 0.00 0.00 00:29:43.113 [2024-12-11T14:10:36.161Z] =================================================================================================================== 00:29:43.113 [2024-12-11T14:10:36.161Z] Total : 22914.43 89.51 0.00 0.00 0.00 0.00 0.00 00:29:43.113 00:29:44.048 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:44.048 Nvme0n1 : 8.00 22947.38 89.64 0.00 0.00 0.00 0.00 0.00 00:29:44.048 [2024-12-11T14:10:37.096Z] =================================================================================================================== 00:29:44.048 [2024-12-11T14:10:37.096Z] Total : 22947.38 89.64 0.00 0.00 0.00 0.00 0.00 00:29:44.048 00:29:44.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:44.983 Nvme0n1 : 9.00 22964.22 89.70 0.00 0.00 0.00 0.00 0.00 00:29:44.983 [2024-12-11T14:10:38.031Z] =================================================================================================================== 00:29:44.983 [2024-12-11T14:10:38.031Z] Total : 22964.22 89.70 0.00 0.00 0.00 0.00 0.00 00:29:44.983 00:29:45.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:45.918 Nvme0n1 : 10.00 22966.50 89.71 0.00 0.00 0.00 0.00 0.00 00:29:45.918 [2024-12-11T14:10:38.966Z] =================================================================================================================== 00:29:45.918 [2024-12-11T14:10:38.966Z] Total : 22966.50 89.71 0.00 0.00 0.00 0.00 0.00 00:29:45.918 00:29:45.918 00:29:45.918 Latency(us) 00:29:45.918 [2024-12-11T14:10:38.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:45.918 Nvme0n1 : 10.00 22972.63 89.74 0.00 0.00 5568.66 3205.57 27126.21 00:29:45.918 [2024-12-11T14:10:38.966Z] =================================================================================================================== 00:29:45.918 [2024-12-11T14:10:38.966Z] Total : 22972.63 89.74 0.00 0.00 5568.66 3205.57 27126.21 00:29:45.918 { 00:29:45.918 "results": [ 00:29:45.918 { 00:29:45.918 "job": "Nvme0n1", 00:29:45.918 "core_mask": "0x2", 00:29:45.918 "workload": "randwrite", 00:29:45.918 "status": "finished", 00:29:45.918 "queue_depth": 128, 00:29:45.918 "io_size": 4096, 00:29:45.918 "runtime": 10.002903, 00:29:45.918 "iops": 22972.631045207578, 00:29:45.918 "mibps": 89.7368400203421, 00:29:45.918 "io_failed": 0, 00:29:45.918 "io_timeout": 0, 00:29:45.918 "avg_latency_us": 5568.660657926727, 00:29:45.918 "min_latency_us": 3205.5652173913045, 00:29:45.918 "max_latency_us": 27126.205217391303 00:29:45.918 } 00:29:45.918 ], 00:29:45.918 "core_count": 1 00:29:45.918 } 00:29:45.918 15:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3306173 00:29:45.918 15:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3306173 ']' 00:29:45.918 15:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3306173 
00:29:45.918 15:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:45.918 15:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:45.918 15:10:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3306173 00:29:46.177 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:46.177 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:46.177 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3306173' 00:29:46.177 killing process with pid 3306173 00:29:46.177 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3306173 00:29:46.177 Received shutdown signal, test time was about 10.000000 seconds 00:29:46.177 00:29:46.177 Latency(us) 00:29:46.177 [2024-12-11T14:10:39.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.177 [2024-12-11T14:10:39.225Z] =================================================================================================================== 00:29:46.177 [2024-12-11T14:10:39.225Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:46.177 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3306173 00:29:46.177 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:46.435 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:46.694 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08ce8572-6ebb-4446-ad6a-dbbcc4684bae 00:29:46.694 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:46.953 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:46.953 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:46.953 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:46.953 [2024-12-11 15:10:39.947579] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:46.953 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
08ce8572-6ebb-4446-ad6a-dbbcc4684bae 00:29:47.215 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:47.215 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08ce8572-6ebb-4446-ad6a-dbbcc4684bae 00:29:47.215 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:29:47.215 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.215 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:29:47.215 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.215 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:29:47.215 15:10:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:47.215 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:29:47.215 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:29:47.216 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08ce8572-6ebb-4446-ad6a-dbbcc4684bae 00:29:47.216 request: 00:29:47.216 { 00:29:47.216 "uuid": "08ce8572-6ebb-4446-ad6a-dbbcc4684bae", 00:29:47.216 "method": "bdev_lvol_get_lvstores", 00:29:47.216 "req_id": 1 00:29:47.216 } 00:29:47.216 Got JSON-RPC error response 00:29:47.216 response: 00:29:47.216 { 00:29:47.216 "code": -19, 00:29:47.216 "message": "No such device" 00:29:47.216 } 00:29:47.216 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:47.216 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:47.216 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:47.216 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:47.216 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:47.474 aio_bdev 00:29:47.474 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev dc9f6e02-32eb-4757-8135-cee05af80998 00:29:47.474 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=dc9f6e02-32eb-4757-8135-cee05af80998 00:29:47.474 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:47.474 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:47.474 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:47.474 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:47.474 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:47.732 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b dc9f6e02-32eb-4757-8135-cee05af80998 -t 2000 00:29:47.991 [ 00:29:47.991 { 00:29:47.991 "name": "dc9f6e02-32eb-4757-8135-cee05af80998", 00:29:47.991 "aliases": [ 00:29:47.991 "lvs/lvol" 00:29:47.991 ], 00:29:47.991 "product_name": "Logical Volume", 00:29:47.991 "block_size": 4096, 00:29:47.991 "num_blocks": 38912, 00:29:47.991 "uuid": "dc9f6e02-32eb-4757-8135-cee05af80998", 00:29:47.991 "assigned_rate_limits": { 00:29:47.991 "rw_ios_per_sec": 0, 00:29:47.991 "rw_mbytes_per_sec": 0, 00:29:47.991 "r_mbytes_per_sec": 0, 00:29:47.991 "w_mbytes_per_sec": 0 00:29:47.991 }, 00:29:47.991 "claimed": false, 00:29:47.991 "zoned": false, 00:29:47.991 "supported_io_types": { 00:29:47.991 "read": true, 00:29:47.991 "write": true, 00:29:47.991 "unmap": true, 00:29:47.991 "flush": false, 00:29:47.991 "reset": true, 00:29:47.991 "nvme_admin": false, 00:29:47.991 "nvme_io": false, 00:29:47.991 "nvme_io_md": false, 00:29:47.991 "write_zeroes": true, 00:29:47.991 "zcopy": false, 00:29:47.991 "get_zone_info": false, 00:29:47.991 "zone_management": false, 00:29:47.991 "zone_append": false, 00:29:47.991 "compare": false, 00:29:47.991 "compare_and_write": false, 00:29:47.991 "abort": false, 00:29:47.991 "seek_hole": true, 00:29:47.991 "seek_data": true, 00:29:47.991 "copy": false, 00:29:47.991 "nvme_iov_md": false 00:29:47.991 }, 00:29:47.991 "driver_specific": { 00:29:47.991 "lvol": { 00:29:47.991 "lvol_store_uuid": "08ce8572-6ebb-4446-ad6a-dbbcc4684bae", 00:29:47.991 "base_bdev": "aio_bdev", 00:29:47.991 "thin_provision": false, 00:29:47.991 "num_allocated_clusters": 38, 00:29:47.991 "snapshot": false, 00:29:47.991 "clone": false, 00:29:47.991 "esnap_clone": false 00:29:47.991 } 00:29:47.991 } 00:29:47.992 } 00:29:47.992 ] 00:29:47.992 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:47.992 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08ce8572-6ebb-4446-ad6a-dbbcc4684bae 00:29:47.992 15:10:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r 
'.[0].free_clusters' 00:29:47.992 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:47.992 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 08ce8572-6ebb-4446-ad6a-dbbcc4684bae 00:29:47.992 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:48.250 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:48.250 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete dc9f6e02-32eb-4757-8135-cee05af80998 00:29:48.509 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08ce8572-6ebb-4446-ad6a-dbbcc4684bae 00:29:48.768 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:48.768 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:29:49.027 00:29:49.027 real 0m15.705s 00:29:49.027 user 0m15.185s 00:29:49.027 sys 0m1.573s 00:29:49.027 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:49.027 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:49.027 ************************************ 00:29:49.027 END TEST lvs_grow_clean 00:29:49.027 ************************************ 00:29:49.027 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:49.027 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:49.027 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:49.027 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:49.027 ************************************ 00:29:49.027 START TEST lvs_grow_dirty 00:29:49.027 ************************************ 00:29:49.027 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:49.027 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:49.027 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:49.027 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:49.027 15:10:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:49.027 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:49.027 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:49.027 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:29:49.027 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:29:49.027 15:10:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:49.286 15:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:49.286 15:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:49.545 15:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633 00:29:49.545 15:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633 00:29:49.545 15:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:49.545 15:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:49.545 15:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:49.545 15:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u 9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633 lvol 150 00:29:49.804 15:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5c7c2b84-ecf7-415f-973e-989c710a16eb 00:29:49.804 15:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:29:49.804 15:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:50.062 [2024-12-11 15:10:42.903492] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:50.062 [2024-12-11 15:10:42.903612] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:50.062 true 00:29:50.063 15:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633 00:29:50.063 15:10:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:50.321 15:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:50.321 15:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:50.321 15:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5c7c2b84-ecf7-415f-973e-989c710a16eb 00:29:50.580 15:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:50.839 [2024-12-11 15:10:43.659919] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:50.839 15:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:50.839 15:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3308734 00:29:50.839 15:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:50.839 15:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:50.839 15:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3308734 /var/tmp/bdevperf.sock 00:29:50.839 15:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3308734 ']' 00:29:50.839 15:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:50.839 15:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:50.839 15:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:29:50.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:50.839 15:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:50.839 15:10:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:51.097 [2024-12-11 15:10:43.907589] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:29:51.097 [2024-12-11 15:10:43.907640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3308734 ] 00:29:51.097 [2024-12-11 15:10:43.985462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.097 [2024-12-11 15:10:44.026905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.097 15:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.097 15:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:51.097 15:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:51.663 Nvme0n1 00:29:51.663 15:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:51.921 [ 00:29:51.921 { 00:29:51.921 "name": "Nvme0n1", 00:29:51.921 "aliases": [ 00:29:51.921 "5c7c2b84-ecf7-415f-973e-989c710a16eb" 00:29:51.921 ], 00:29:51.921 "product_name": "NVMe disk", 00:29:51.921 "block_size": 4096, 00:29:51.921 "num_blocks": 38912, 00:29:51.921 "uuid": "5c7c2b84-ecf7-415f-973e-989c710a16eb", 00:29:51.921 "numa_id": 1, 00:29:51.921 "assigned_rate_limits": { 00:29:51.921 "rw_ios_per_sec": 0, 00:29:51.921 "rw_mbytes_per_sec": 0, 00:29:51.921 "r_mbytes_per_sec": 0, 00:29:51.921 "w_mbytes_per_sec": 0 00:29:51.921 }, 00:29:51.921 "claimed": false, 00:29:51.921 "zoned": false, 00:29:51.921 "supported_io_types": { 00:29:51.921 "read": true, 00:29:51.921 "write": true, 00:29:51.921 "unmap": true, 00:29:51.921 "flush": true, 00:29:51.921 "reset": true, 00:29:51.921 "nvme_admin": true, 00:29:51.921 "nvme_io": true, 00:29:51.921 "nvme_io_md": false, 00:29:51.921 "write_zeroes": true, 00:29:51.921 "zcopy": false, 00:29:51.921 "get_zone_info": false, 00:29:51.921 "zone_management": false, 00:29:51.921 "zone_append": false, 00:29:51.921 "compare": true, 00:29:51.921 "compare_and_write": true, 00:29:51.921 "abort": true, 00:29:51.921 "seek_hole": false, 00:29:51.921 "seek_data": false, 00:29:51.921 "copy": true, 00:29:51.921 "nvme_iov_md": false 00:29:51.921 }, 00:29:51.921 "memory_domains": [ 00:29:51.921 { 00:29:51.921 "dma_device_id": "system", 00:29:51.921 "dma_device_type": 1 00:29:51.921 } 00:29:51.921 ], 00:29:51.921 "driver_specific": { 00:29:51.921 "nvme": [ 00:29:51.921 { 00:29:51.921 "trid": { 00:29:51.921 "trtype": "TCP", 00:29:51.921 "adrfam": "IPv4", 00:29:51.921 "traddr": "10.0.0.2", 00:29:51.921 
"trsvcid": "4420", 00:29:51.921 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:51.921 }, 00:29:51.921 "ctrlr_data": { 00:29:51.921 "cntlid": 1, 00:29:51.921 "vendor_id": "0x8086", 00:29:51.921 "model_number": "SPDK bdev Controller", 00:29:51.921 "serial_number": "SPDK0", 00:29:51.922 "firmware_revision": "25.01", 00:29:51.922 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:51.922 "oacs": { 00:29:51.922 "security": 0, 00:29:51.922 "format": 0, 00:29:51.922 "firmware": 0, 00:29:51.922 "ns_manage": 0 00:29:51.922 }, 00:29:51.922 "multi_ctrlr": true, 00:29:51.922 "ana_reporting": false 00:29:51.922 }, 00:29:51.922 "vs": { 00:29:51.922 "nvme_version": "1.3" 00:29:51.922 }, 00:29:51.922 "ns_data": { 00:29:51.922 "id": 1, 00:29:51.922 "can_share": true 00:29:51.922 } 00:29:51.922 } 00:29:51.922 ], 00:29:51.922 "mp_policy": "active_passive" 00:29:51.922 } 00:29:51.922 } 00:29:51.922 ] 00:29:51.922 15:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3308957 00:29:51.922 15:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:51.922 15:10:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:51.922 Running I/O for 10 seconds... 00:29:52.855 Latency(us) 00:29:52.855 [2024-12-11T14:10:45.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:52.855 Nvme0n1 : 1.00 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:29:52.855 [2024-12-11T14:10:45.903Z] =================================================================================================================== 00:29:52.855 [2024-12-11T14:10:45.903Z] Total : 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:29:52.855 00:29:53.789 15:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633 00:29:53.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:53.790 Nvme0n1 : 2.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:29:53.790 [2024-12-11T14:10:46.838Z] =================================================================================================================== 00:29:53.790 [2024-12-11T14:10:46.838Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:29:53.790 00:29:54.048 true 00:29:54.048 15:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633 00:29:54.048 15:10:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:54.307 15:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:54.307 15:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:54.307 15:10:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 
3308957 00:29:54.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:54.873 Nvme0n1 : 3.00 22786.67 89.01 0.00 0.00 0.00 0.00 0.00 00:29:54.873 [2024-12-11T14:10:47.921Z] =================================================================================================================== 00:29:54.873 [2024-12-11T14:10:47.921Z] Total : 22786.67 89.01 0.00 0.00 0.00 0.00 0.00 00:29:54.873 00:29:55.853 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:55.853 Nvme0n1 : 4.00 22836.75 89.21 0.00 0.00 0.00 0.00 0.00 00:29:55.853 [2024-12-11T14:10:48.901Z] =================================================================================================================== 00:29:55.853 [2024-12-11T14:10:48.901Z] Total : 22836.75 89.21 0.00 0.00 0.00 0.00 0.00 00:29:55.853 00:29:56.834 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:56.834 Nvme0n1 : 5.00 22917.60 89.52 0.00 0.00 0.00 0.00 0.00 00:29:56.834 [2024-12-11T14:10:49.882Z] =================================================================================================================== 00:29:56.834 [2024-12-11T14:10:49.882Z] Total : 22917.60 89.52 0.00 0.00 0.00 0.00 0.00 00:29:56.834 00:29:58.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:58.209 Nvme0n1 : 6.00 22950.33 89.65 0.00 0.00 0.00 0.00 0.00 00:29:58.209 [2024-12-11T14:10:51.257Z] =================================================================================================================== 00:29:58.209 [2024-12-11T14:10:51.257Z] Total : 22950.33 89.65 0.00 0.00 0.00 0.00 0.00 00:29:58.209 00:29:59.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:59.144 Nvme0n1 : 7.00 22991.86 89.81 0.00 0.00 0.00 0.00 0.00 00:29:59.144 [2024-12-11T14:10:52.192Z] =================================================================================================================== 00:29:59.144 [2024-12-11T14:10:52.192Z] Total : 22991.86 89.81 0.00 0.00 0.00 0.00 0.00 00:29:59.144 00:30:00.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:00.078 Nvme0n1 : 8.00 23023.00 89.93 0.00 0.00 0.00 0.00 0.00 00:30:00.078 [2024-12-11T14:10:53.126Z] =================================================================================================================== 00:30:00.078 [2024-12-11T14:10:53.126Z] Total : 23023.00 89.93 0.00 0.00 0.00 0.00 0.00 00:30:00.078 00:30:01.013 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:01.013 Nvme0n1 : 9.00 23061.33 90.08 0.00 0.00 0.00 0.00 0.00 00:30:01.013 [2024-12-11T14:10:54.061Z] =================================================================================================================== 00:30:01.013 [2024-12-11T14:10:54.061Z] Total : 23061.33 90.08 0.00 0.00 0.00 0.00 0.00 00:30:01.013 00:30:01.948 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:01.948 Nvme0n1 : 10.00 23076.40 90.14 0.00 0.00 0.00 0.00 0.00 00:30:01.948 [2024-12-11T14:10:54.996Z] =================================================================================================================== 00:30:01.948 [2024-12-11T14:10:54.996Z] Total : 23076.40 90.14 0.00 0.00 0.00 0.00 0.00 00:30:01.948 00:30:01.948 00:30:01.948 Latency(us) 00:30:01.948 [2024-12-11T14:10:54.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:01.948 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:30:01.948 Nvme0n1 : 10.00 23070.50 90.12 0.00 0.00 5544.65 3419.27 28379.94 00:30:01.948 [2024-12-11T14:10:54.996Z] =================================================================================================================== 00:30:01.948 [2024-12-11T14:10:54.996Z] Total : 23070.50 90.12 0.00 0.00 5544.65 3419.27 28379.94 00:30:01.948 { 00:30:01.948 "results": [ 00:30:01.948 { 00:30:01.948 "job": "Nvme0n1", 00:30:01.948 "core_mask": "0x2", 00:30:01.948 "workload": "randwrite", 00:30:01.948 "status": "finished", 00:30:01.948 "queue_depth": 128, 00:30:01.948 "io_size": 4096, 00:30:01.948 "runtime": 10.003859, 00:30:01.948 "iops": 23070.497095170973, 00:30:01.948 "mibps": 90.11912927801161, 00:30:01.948 "io_failed": 0, 00:30:01.948 "io_timeout": 0, 00:30:01.948 "avg_latency_us": 5544.64773592562, 00:30:01.948 "min_latency_us": 3419.269565217391, 00:30:01.948 "max_latency_us": 28379.93739130435 00:30:01.948 } 00:30:01.948 ], 00:30:01.948 "core_count": 1 00:30:01.948 } 00:30:01.948 15:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3308734 00:30:01.948 15:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3308734 ']' 00:30:01.948 15:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3308734 00:30:01.948 15:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:01.948 15:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:01.948 15:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3308734 00:30:01.948 15:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:01.948 15:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:01.948 15:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3308734' 00:30:01.948 killing process with pid 3308734 00:30:01.948 15:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3308734 00:30:01.948 Received shutdown signal, test time was about 10.000000 seconds 00:30:01.948 00:30:01.948 Latency(us) 00:30:01.948 [2024-12-11T14:10:54.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:01.948 [2024-12-11T14:10:54.996Z] =================================================================================================================== 00:30:01.948 [2024-12-11T14:10:54.996Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:01.948 15:10:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3308734 00:30:02.206 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:02.464 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:02.722 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633 00:30:02.722 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:02.722 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:02.722 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:02.722 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3305872 00:30:02.722 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3305872 00:30:02.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3305872 Killed "${NVMF_APP[@]}" "$@" 00:30:02.722 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:02.722 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:02.722 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:02.722 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:02.722 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:02.722 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3310763 00:30:02.722 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3310763 00:30:02.722 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:02.722 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3310763 ']' 00:30:02.722 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:02.981 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:02.981 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:02.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
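For the dirty variant the test deliberately kills the long-running target (the kill -9 on pid 3305872 in the trace above, hence the "Killed" message from nvmf_lvs_grow.sh line 75) and starts a fresh nvmf_tgt with "-i 0 -e 0xFFFF --interrupt-mode -m 0x1": the 0x1 core mask pins it to a single core, -e 0xFFFF enables every tracepoint group (the "Tracepoint Group Mask 0xFFFF specified" notice below), and --interrupt-mode switches the reactors from busy-polling to event-driven operation, which the spdk_thread_set_interrupt_mode notices below confirm for the app thread and the poll group.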
00:30:02.981 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:02.981 15:10:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:02.981 [2024-12-11 15:10:55.811312] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:02.981 [2024-12-11 15:10:55.812279] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:30:02.981 [2024-12-11 15:10:55.812318] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:02.981 [2024-12-11 15:10:55.893891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.981 [2024-12-11 15:10:55.932135] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:02.981 [2024-12-11 15:10:55.932172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:02.981 [2024-12-11 15:10:55.932179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:02.981 [2024-12-11 15:10:55.932185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:02.981 [2024-12-11 15:10:55.932189] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:02.981 [2024-12-11 15:10:55.932707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.981 [2024-12-11 15:10:56.000715] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:02.981 [2024-12-11 15:10:56.000930] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
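What follows is the core of the dirty case: with the new target up, the same backing file is re-attached as an aio bdev and the blobstore on it is recovered rather than cleanly loaded (the "Performing recovery on blobstore" and "Recover: blob 0x0"/"0x1" notices just below), bringing the grown lvstore and its lvol back without any explicit import step. Reduced to the RPC calls visible in this trace (paths shortened; the UUIDs are the ones from this run):
  kill -9 3305872                                                         # old target dies without a clean blobstore unload
  # ... restart nvmf_tgt with --interrupt-mode as shown above ...
  rpc.py bdev_aio_create .../test/nvmf/target/aio_bdev aio_bdev 4096      # re-attach the 400M backing file; recovery runs on load
  rpc.py bdev_get_bdevs -b 5c7c2b84-ecf7-415f-973e-989c710a16eb           # the lvol reappears
  rpc.py bdev_lvol_get_lvstores -u 9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633   # reports the grown capacity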
00:30:03.240 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:03.240 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:03.240 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:03.240 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:03.240 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:03.240 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:03.240 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:03.240 [2024-12-11 15:10:56.250168] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:03.240 [2024-12-11 15:10:56.250421] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:03.240 [2024-12-11 15:10:56.250507] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:03.499 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:03.499 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5c7c2b84-ecf7-415f-973e-989c710a16eb 00:30:03.499 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=5c7c2b84-ecf7-415f-973e-989c710a16eb 00:30:03.499 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:03.499 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:03.499 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:03.499 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:03.499 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:03.499 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b 5c7c2b84-ecf7-415f-973e-989c710a16eb -t 2000 00:30:03.757 [ 00:30:03.757 { 00:30:03.757 "name": "5c7c2b84-ecf7-415f-973e-989c710a16eb", 00:30:03.757 "aliases": [ 00:30:03.757 "lvs/lvol" 00:30:03.757 ], 00:30:03.757 "product_name": "Logical Volume", 00:30:03.757 "block_size": 4096, 00:30:03.757 "num_blocks": 38912, 00:30:03.757 "uuid": "5c7c2b84-ecf7-415f-973e-989c710a16eb", 00:30:03.757 "assigned_rate_limits": { 00:30:03.757 "rw_ios_per_sec": 0, 00:30:03.757 "rw_mbytes_per_sec": 0, 
00:30:03.757 "r_mbytes_per_sec": 0, 00:30:03.757 "w_mbytes_per_sec": 0 00:30:03.757 }, 00:30:03.757 "claimed": false, 00:30:03.757 "zoned": false, 00:30:03.757 "supported_io_types": { 00:30:03.757 "read": true, 00:30:03.757 "write": true, 00:30:03.757 "unmap": true, 00:30:03.757 "flush": false, 00:30:03.757 "reset": true, 00:30:03.757 "nvme_admin": false, 00:30:03.757 "nvme_io": false, 00:30:03.757 "nvme_io_md": false, 00:30:03.757 "write_zeroes": true, 00:30:03.757 "zcopy": false, 00:30:03.757 "get_zone_info": false, 00:30:03.757 "zone_management": false, 00:30:03.757 "zone_append": false, 00:30:03.757 "compare": false, 00:30:03.757 "compare_and_write": false, 00:30:03.757 "abort": false, 00:30:03.757 "seek_hole": true, 00:30:03.757 "seek_data": true, 00:30:03.757 "copy": false, 00:30:03.757 "nvme_iov_md": false 00:30:03.757 }, 00:30:03.757 "driver_specific": { 00:30:03.757 "lvol": { 00:30:03.757 "lvol_store_uuid": "9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633", 00:30:03.757 "base_bdev": "aio_bdev", 00:30:03.758 "thin_provision": false, 00:30:03.758 "num_allocated_clusters": 38, 00:30:03.758 "snapshot": false, 00:30:03.758 "clone": false, 00:30:03.758 "esnap_clone": false 00:30:03.758 } 00:30:03.758 } 00:30:03.758 } 00:30:03.758 ] 00:30:03.758 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:03.758 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633 00:30:03.758 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:04.016 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:04.016 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633 00:30:04.016 15:10:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:04.016 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:04.016 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:04.275 [2024-12-11 15:10:57.201195] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:04.275 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633 00:30:04.275 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:04.275 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633 00:30:04.275 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:04.275 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:04.275 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:04.275 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:04.275 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:04.275 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:04.275 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:04.275 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:30:04.275 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633 00:30:04.533 request: 00:30:04.533 { 00:30:04.533 "uuid": "9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633", 00:30:04.533 "method": "bdev_lvol_get_lvstores", 00:30:04.533 "req_id": 1 00:30:04.533 } 00:30:04.533 Got JSON-RPC error response 00:30:04.533 response: 00:30:04.533 { 00:30:04.533 "code": -19, 00:30:04.533 "message": "No such device" 00:30:04.533 } 00:30:04.533 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:04.533 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:04.533 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:04.533 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:04.533 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:04.792 aio_bdev 00:30:04.792 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5c7c2b84-ecf7-415f-973e-989c710a16eb 00:30:04.792 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=5c7c2b84-ecf7-415f-973e-989c710a16eb 00:30:04.792 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:30:04.792 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:04.792 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:04.792 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:04.792 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:05.050 15:10:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b 5c7c2b84-ecf7-415f-973e-989c710a16eb -t 2000 00:30:05.050 [ 00:30:05.050 { 00:30:05.050 "name": "5c7c2b84-ecf7-415f-973e-989c710a16eb", 00:30:05.050 "aliases": [ 00:30:05.050 "lvs/lvol" 00:30:05.050 ], 00:30:05.050 "product_name": "Logical Volume", 00:30:05.050 "block_size": 4096, 00:30:05.050 "num_blocks": 38912, 00:30:05.050 "uuid": "5c7c2b84-ecf7-415f-973e-989c710a16eb", 00:30:05.050 "assigned_rate_limits": { 00:30:05.050 "rw_ios_per_sec": 0, 00:30:05.050 "rw_mbytes_per_sec": 0, 00:30:05.050 "r_mbytes_per_sec": 0, 00:30:05.050 "w_mbytes_per_sec": 0 00:30:05.050 }, 00:30:05.050 "claimed": false, 00:30:05.050 "zoned": false, 00:30:05.050 "supported_io_types": { 00:30:05.050 "read": true, 00:30:05.050 "write": true, 00:30:05.050 "unmap": true, 00:30:05.050 "flush": false, 00:30:05.050 "reset": true, 00:30:05.050 "nvme_admin": false, 00:30:05.050 "nvme_io": false, 00:30:05.050 "nvme_io_md": false, 00:30:05.050 "write_zeroes": true, 00:30:05.050 "zcopy": false, 00:30:05.050 "get_zone_info": false, 00:30:05.050 "zone_management": false, 00:30:05.050 "zone_append": false, 00:30:05.050 "compare": false, 00:30:05.050 "compare_and_write": false, 00:30:05.050 "abort": false, 00:30:05.050 "seek_hole": true, 00:30:05.050 "seek_data": true, 00:30:05.050 "copy": false, 00:30:05.050 "nvme_iov_md": false 00:30:05.050 }, 00:30:05.050 "driver_specific": { 00:30:05.050 "lvol": { 00:30:05.050 "lvol_store_uuid": "9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633", 00:30:05.050 "base_bdev": "aio_bdev", 00:30:05.050 "thin_provision": false, 00:30:05.050 "num_allocated_clusters": 38, 00:30:05.050 "snapshot": false, 00:30:05.050 "clone": false, 00:30:05.050 "esnap_clone": false 00:30:05.050 } 00:30:05.050 } 00:30:05.050 } 00:30:05.050 ] 00:30:05.050 15:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:05.050 15:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633 00:30:05.050 15:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:05.309 15:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:05.309 15:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633 
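The arithmetic behind these cluster assertions matches the sizes declared at the top of the dirty test: with the 4 MiB cluster size, the 150M lvol pins 38 clusters (150/4 = 37.5, rounded up to 38, the "num_allocated_clusters" in the dump above); the original 200M file yielded 49 data clusters and the grown 400M file yields 99 (roughly one cluster's worth goes to blobstore metadata in each case), so 99 - 38 = 61 free clusters, which is exactly what the free_clusters == 61 check above and the total_data_clusters == 99 check just below assert.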
00:30:05.309 15:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:05.567 15:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:05.567 15:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete 5c7c2b84-ecf7-415f-973e-989c710a16eb 00:30:05.826 15:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9be1dc3d-1a5a-4cfe-a314-fe4d2aaaa633 00:30:06.085 15:10:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:06.085 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:30:06.085 00:30:06.085 real 0m17.203s 00:30:06.085 user 0m34.525s 00:30:06.085 sys 0m3.897s 00:30:06.085 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.085 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:06.085 ************************************ 00:30:06.085 END TEST lvs_grow_dirty 00:30:06.085 ************************************ 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:06.345 nvmf_trace.0 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@121 -- # sync 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:06.345 rmmod nvme_tcp 00:30:06.345 rmmod nvme_fabrics 00:30:06.345 rmmod nvme_keyring 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3310763 ']' 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3310763 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3310763 ']' 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3310763 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3310763 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3310763' 00:30:06.345 killing process with pid 3310763 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3310763 00:30:06.345 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3310763 00:30:06.605 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:06.605 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:06.605 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:06.605 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:06.605 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:06.605 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:06.605 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:06.605 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:06.605 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:06.605 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.605 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.605 15:10:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.533 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:08.533 00:30:08.533 real 0m41.936s 00:30:08.533 user 0m52.132s 00:30:08.533 sys 0m10.302s 00:30:08.533 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:08.533 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:08.533 ************************************ 00:30:08.533 END TEST nvmf_lvs_grow 00:30:08.533 ************************************ 00:30:08.533 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:08.533 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:08.533 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:08.793 ************************************ 00:30:08.793 START TEST nvmf_bdev_io_wait 00:30:08.793 ************************************ 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:08.793 * Looking for test storage... 
00:30:08.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:08.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.793 --rc genhtml_branch_coverage=1 00:30:08.793 --rc genhtml_function_coverage=1 00:30:08.793 --rc genhtml_legend=1 00:30:08.793 --rc geninfo_all_blocks=1 00:30:08.793 --rc geninfo_unexecuted_blocks=1 00:30:08.793 00:30:08.793 ' 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:08.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.793 --rc genhtml_branch_coverage=1 00:30:08.793 --rc genhtml_function_coverage=1 00:30:08.793 --rc genhtml_legend=1 00:30:08.793 --rc geninfo_all_blocks=1 00:30:08.793 --rc geninfo_unexecuted_blocks=1 00:30:08.793 00:30:08.793 ' 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:08.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.793 --rc genhtml_branch_coverage=1 00:30:08.793 --rc genhtml_function_coverage=1 00:30:08.793 --rc genhtml_legend=1 00:30:08.793 --rc geninfo_all_blocks=1 00:30:08.793 --rc geninfo_unexecuted_blocks=1 00:30:08.793 00:30:08.793 ' 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:08.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.793 --rc genhtml_branch_coverage=1 00:30:08.793 --rc genhtml_function_coverage=1 00:30:08.793 --rc genhtml_legend=1 00:30:08.793 --rc geninfo_all_blocks=1 00:30:08.793 --rc 
geninfo_unexecuted_blocks=1 00:30:08.793 00:30:08.793 ' 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.793 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:08.794 15:11:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:15.365 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:15.365 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:30:15.365 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:15.365 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:15.365 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:15.365 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:15.365 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:30:15.365 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:30:15.365 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:15.365 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:30:15.365 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:30:15.365 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:30:15.365 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:30:15.365 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
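Note on the device discovery traced next: gather_supported_nvmf_pci_devs has just built allow-lists of Intel E810/X722 and Mellanox device IDs and kept the two matching E810 functions in pci_devs; the entries that follow resolve each PCI function to its kernel net device through sysfs. A minimal stand-alone sketch of that mapping, using the first address reported in this run (the glob and the ##*/ trim are the same ones visible in the trace; the address is machine-specific):

    pci=0000:86:00.0                                    # first E810 port found in this run
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"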
00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:15.366 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:15.366 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:15.366 Found net devices under 0000:86:00.0: cvl_0_0 00:30:15.366 
15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:15.366 Found net devices under 0000:86:00.1: cvl_0_1 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:15.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:15.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:30:15.366 00:30:15.366 --- 10.0.0.2 ping statistics --- 00:30:15.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.366 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:15.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:15.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:30:15.366 00:30:15.366 --- 10.0.0.1 ping statistics --- 00:30:15.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.366 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:15.366 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3314850 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3314850 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3314850 ']' 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
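The nvmf_tcp_init sequence above splits the two E810 ports into a point-to-point test topology: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule accepts TCP port 4420 on the initiator interface, and both directions are ping-verified before the target is launched inside the namespace. Condensed from the trace (interface names, paths and the core mask are specific to this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # the target then runs inside the namespace and waits for RPC configuration:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &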
00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:15.367 [2024-12-11 15:11:07.820232] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:15.367 [2024-12-11 15:11:07.821147] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:30:15.367 [2024-12-11 15:11:07.821184] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:15.367 [2024-12-11 15:11:07.901352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:15.367 [2024-12-11 15:11:07.942765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:15.367 [2024-12-11 15:11:07.942802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:15.367 [2024-12-11 15:11:07.942810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:15.367 [2024-12-11 15:11:07.942816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:15.367 [2024-12-11 15:11:07.942823] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:15.367 [2024-12-11 15:11:07.944245] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.367 [2024-12-11 15:11:07.944350] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:15.367 [2024-12-11 15:11:07.944464] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.367 [2024-12-11 15:11:07.944465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:15.367 [2024-12-11 15:11:07.944731] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:15.367 15:11:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:15.367 [2024-12-11 15:11:08.077115] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:15.367 [2024-12-11 15:11:08.077709] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:15.367 [2024-12-11 15:11:08.077720] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:15.367 [2024-12-11 15:11:08.077872] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
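With the target listening on /var/tmp/spdk.sock, the harness configures it over rpc.py. Because nvmf_tgt was started with --wait-for-rpc, bdev_set_options has to land before framework_start_init; the very small values passed here (-p 5 -c 1) are presumably sized so that bdev I/O allocation can run dry under load and exercise the queue-and-retry path this bdev_io_wait test is named after. A sketch of the two pre-init calls as traced above (the meaning of -p/-c as pool and cache sizes is an assumption, only the raw flags appear in the log):

    # issued against the target's default RPC socket /var/tmp/spdk.sock
    ./scripts/rpc.py bdev_set_options -p 5 -c 1   # assumed: -p = bdev_io_pool_size, -c = bdev_io_cache_size
    ./scripts/rpc.py framework_start_init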
00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:15.367 [2024-12-11 15:11:08.089047] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:15.367 Malloc0 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:15.367 [2024-12-11 15:11:08.165405] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3314872 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3314874 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:15.367 { 00:30:15.367 "params": { 00:30:15.367 "name": "Nvme$subsystem", 00:30:15.367 "trtype": "$TEST_TRANSPORT", 00:30:15.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.367 "adrfam": "ipv4", 00:30:15.367 "trsvcid": "$NVMF_PORT", 00:30:15.367 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.367 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.367 "hdgst": ${hdgst:-false}, 00:30:15.367 "ddgst": ${ddgst:-false} 00:30:15.367 }, 00:30:15.367 "method": "bdev_nvme_attach_controller" 00:30:15.367 } 00:30:15.367 EOF 00:30:15.367 )") 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3314876 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:15.367 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:15.367 { 00:30:15.367 "params": { 00:30:15.367 "name": "Nvme$subsystem", 00:30:15.367 "trtype": "$TEST_TRANSPORT", 00:30:15.367 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.367 "adrfam": "ipv4", 00:30:15.367 "trsvcid": "$NVMF_PORT", 00:30:15.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.368 "hdgst": ${hdgst:-false}, 00:30:15.368 "ddgst": ${ddgst:-false} 00:30:15.368 }, 00:30:15.368 "method": "bdev_nvme_attach_controller" 00:30:15.368 } 00:30:15.368 EOF 00:30:15.368 )") 00:30:15.368 15:11:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3314879 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:15.368 { 00:30:15.368 "params": { 00:30:15.368 "name": "Nvme$subsystem", 00:30:15.368 "trtype": "$TEST_TRANSPORT", 00:30:15.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.368 "adrfam": "ipv4", 00:30:15.368 "trsvcid": "$NVMF_PORT", 00:30:15.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.368 "hdgst": ${hdgst:-false}, 00:30:15.368 "ddgst": ${ddgst:-false} 00:30:15.368 }, 00:30:15.368 "method": "bdev_nvme_attach_controller" 00:30:15.368 } 00:30:15.368 EOF 00:30:15.368 )") 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:15.368 { 00:30:15.368 "params": { 00:30:15.368 "name": "Nvme$subsystem", 00:30:15.368 "trtype": "$TEST_TRANSPORT", 00:30:15.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:15.368 "adrfam": "ipv4", 00:30:15.368 "trsvcid": "$NVMF_PORT", 00:30:15.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:15.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:15.368 "hdgst": ${hdgst:-false}, 00:30:15.368 "ddgst": ${ddgst:-false} 00:30:15.368 }, 00:30:15.368 "method": "bdev_nvme_attach_controller" 00:30:15.368 } 00:30:15.368 EOF 00:30:15.368 )") 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3314872 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
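The "/dev/fd/63" in each bdevperf command line above is the shell's process substitution of the gen_nvmf_target_json call traced alongside it: the function renders the bdev_nvme_attach_controller parameters from the heredoc template (the filled-in JSON is printed by the jq/printf entries that follow), and bdevperf reads it as its --json config, so every perf job attaches to the nqn.2016-06.io.spdk:cnode1 listener created earlier. A sketch of how the first (write) job is wired up, with flags copied from the trace and the process substitution standing in for /dev/fd/63:

    ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    wait $WRITE_PID        # the "wait 3314872" entry above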
00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:15.368 "params": { 00:30:15.368 "name": "Nvme1", 00:30:15.368 "trtype": "tcp", 00:30:15.368 "traddr": "10.0.0.2", 00:30:15.368 "adrfam": "ipv4", 00:30:15.368 "trsvcid": "4420", 00:30:15.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:15.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:15.368 "hdgst": false, 00:30:15.368 "ddgst": false 00:30:15.368 }, 00:30:15.368 "method": "bdev_nvme_attach_controller" 00:30:15.368 }' 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:15.368 "params": { 00:30:15.368 "name": "Nvme1", 00:30:15.368 "trtype": "tcp", 00:30:15.368 "traddr": "10.0.0.2", 00:30:15.368 "adrfam": "ipv4", 00:30:15.368 "trsvcid": "4420", 00:30:15.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:15.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:15.368 "hdgst": false, 00:30:15.368 "ddgst": false 00:30:15.368 }, 00:30:15.368 "method": "bdev_nvme_attach_controller" 00:30:15.368 }' 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:15.368 "params": { 00:30:15.368 "name": "Nvme1", 00:30:15.368 "trtype": "tcp", 00:30:15.368 "traddr": "10.0.0.2", 00:30:15.368 "adrfam": "ipv4", 00:30:15.368 "trsvcid": "4420", 00:30:15.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:15.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:15.368 "hdgst": false, 00:30:15.368 "ddgst": false 00:30:15.368 }, 00:30:15.368 "method": "bdev_nvme_attach_controller" 00:30:15.368 }' 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:30:15.368 15:11:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:15.368 "params": { 00:30:15.368 "name": "Nvme1", 00:30:15.368 "trtype": "tcp", 00:30:15.368 "traddr": "10.0.0.2", 00:30:15.368 "adrfam": "ipv4", 00:30:15.368 "trsvcid": "4420", 00:30:15.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:15.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:15.368 "hdgst": false, 00:30:15.368 "ddgst": false 00:30:15.368 }, 00:30:15.368 "method": "bdev_nvme_attach_controller" 00:30:15.368 }' 00:30:15.368 [2024-12-11 15:11:08.216052] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:30:15.368 [2024-12-11 15:11:08.216102] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:15.368 [2024-12-11 15:11:08.216367] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
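The four gen_nvmf_target_json calls above each render one attach-controller entry and print the resolved parameters; the /dev/fd/63 that every bdevperf instance reads is a process substitution carrying that entry wrapped in a bdev-subsystem config. The full wrapper is not echoed in this trace, so the following is only a sketch of the shape such a config takes, using the Nvme1 values printed above:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }

The four instances differ only in core mask (-m 0x10/0x20/0x40/0x80), instance id (-i 1..4) and workload (-w write/read/flush/unmap); the distinct file prefixes spdk1..spdk4 in the EAL parameter lines are what keep their DPDK state from colliding.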
00:30:15.368 [2024-12-11 15:11:08.216412] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:15.368 [2024-12-11 15:11:08.219351] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:30:15.368 [2024-12-11 15:11:08.219393] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:15.368 [2024-12-11 15:11:08.221896] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:30:15.368 [2024-12-11 15:11:08.221940] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:15.368 [2024-12-11 15:11:08.409919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.626 [2024-12-11 15:11:08.451689] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:30:15.626 [2024-12-11 15:11:08.503084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.626 [2024-12-11 15:11:08.553206] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:30:15.626 [2024-12-11 15:11:08.563332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.626 [2024-12-11 15:11:08.604091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:30:15.626 [2024-12-11 15:11:08.622872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.626 [2024-12-11 15:11:08.664536] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:30:15.883 Running I/O for 1 seconds... 00:30:15.883 Running I/O for 1 seconds... 00:30:15.883 Running I/O for 1 seconds... 00:30:15.883 Running I/O for 1 seconds... 
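In the per-job tables that follow, the MiB/s column is simply IOPS scaled by the 4096-byte I/O size, MiB/s = IOPS x 4096 / 2^20; for example the write job's 8204.54 IOPS works out to 8204.54 x 4096 / 1048576, about 32.05 MiB/s, matching the reported figure. The much higher rate of the flush job is expected, since a flush carries no data payload.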
00:30:16.813 234440.00 IOPS, 915.78 MiB/s 00:30:16.813 Latency(us) 00:30:16.813 [2024-12-11T14:11:09.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.813 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:16.813 Nvme1n1 : 1.00 234076.29 914.36 0.00 0.00 543.96 227.06 1531.55 00:30:16.813 [2024-12-11T14:11:09.861Z] =================================================================================================================== 00:30:16.813 [2024-12-11T14:11:09.861Z] Total : 234076.29 914.36 0.00 0.00 543.96 227.06 1531.55 00:30:16.813 8196.00 IOPS, 32.02 MiB/s 00:30:16.814 Latency(us) 00:30:16.814 [2024-12-11T14:11:09.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.814 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:16.814 Nvme1n1 : 1.02 8204.54 32.05 0.00 0.00 15470.53 1531.55 23592.96 00:30:16.814 [2024-12-11T14:11:09.862Z] =================================================================================================================== 00:30:16.814 [2024-12-11T14:11:09.862Z] Total : 8204.54 32.05 0.00 0.00 15470.53 1531.55 23592.96 00:30:16.814 13002.00 IOPS, 50.79 MiB/s 00:30:16.814 Latency(us) 00:30:16.814 [2024-12-11T14:11:09.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:16.814 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:16.814 Nvme1n1 : 1.01 13069.40 51.05 0.00 0.00 9765.42 4131.62 14474.91 00:30:16.814 [2024-12-11T14:11:09.862Z] =================================================================================================================== 00:30:16.814 [2024-12-11T14:11:09.862Z] Total : 13069.40 51.05 0.00 0.00 9765.42 4131.62 14474.91 00:30:17.071 15:11:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3314874 00:30:17.071 8615.00 IOPS, 33.65 MiB/s 00:30:17.071 Latency(us) 00:30:17.071 [2024-12-11T14:11:10.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:17.071 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:17.071 Nvme1n1 : 1.00 8706.15 34.01 0.00 0.00 14671.33 3177.07 32824.99 00:30:17.071 [2024-12-11T14:11:10.119Z] =================================================================================================================== 00:30:17.071 [2024-12-11T14:11:10.119Z] Total : 8706.15 34.01 0.00 0.00 14671.33 3177.07 32824.99 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3314876 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3314879 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:17.071 rmmod nvme_tcp 00:30:17.071 rmmod nvme_fabrics 00:30:17.071 rmmod nvme_keyring 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3314850 ']' 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3314850 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3314850 ']' 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3314850 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:17.071 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3314850 00:30:17.330 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:17.330 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:17.330 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3314850' 00:30:17.330 killing process with pid 3314850 00:30:17.330 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3314850 00:30:17.330 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3314850 00:30:17.330 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:17.330 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:17.330 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:17.330 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:30:17.330 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 
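nvmftestfini above is the teardown half of the fixture: unload the host-side NVMe transport modules, stop the target process, then strip only the SPDK-tagged firewall rules and the target namespace. Condensed into plain commands (a sketch; the internals of _remove_spdk_ns are not shown in this trace, so the namespace removal line is an assumption about what it amounts to):

  modprobe -r nvme-tcp nvme-fabrics       # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
  kill 3314850                            # nvmfpid of the target started for this test
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the rules tagged SPDK_NVMF
  ip netns delete cvl_0_0_ns_spdk         # assumption: namespace removal done by _remove_spdk_ns
  ip -4 addr flush cvl_0_1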
00:30:17.330 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:30:17.330 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:17.330 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:17.330 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:17.330 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.330 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:17.330 15:11:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:19.865 00:30:19.865 real 0m10.768s 00:30:19.865 user 0m14.866s 00:30:19.865 sys 0m6.456s 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:19.865 ************************************ 00:30:19.865 END TEST nvmf_bdev_io_wait 00:30:19.865 ************************************ 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:19.865 ************************************ 00:30:19.865 START TEST nvmf_queue_depth 00:30:19.865 ************************************ 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:19.865 * Looking for test storage... 
00:30:19.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:19.865 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:19.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.866 --rc genhtml_branch_coverage=1 00:30:19.866 --rc genhtml_function_coverage=1 00:30:19.866 --rc genhtml_legend=1 00:30:19.866 --rc geninfo_all_blocks=1 00:30:19.866 --rc geninfo_unexecuted_blocks=1 00:30:19.866 00:30:19.866 ' 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:19.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.866 --rc genhtml_branch_coverage=1 00:30:19.866 --rc genhtml_function_coverage=1 00:30:19.866 --rc genhtml_legend=1 00:30:19.866 --rc geninfo_all_blocks=1 00:30:19.866 --rc geninfo_unexecuted_blocks=1 00:30:19.866 00:30:19.866 ' 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:19.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.866 --rc genhtml_branch_coverage=1 00:30:19.866 --rc genhtml_function_coverage=1 00:30:19.866 --rc genhtml_legend=1 00:30:19.866 --rc geninfo_all_blocks=1 00:30:19.866 --rc geninfo_unexecuted_blocks=1 00:30:19.866 00:30:19.866 ' 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:19.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:19.866 --rc genhtml_branch_coverage=1 00:30:19.866 --rc genhtml_function_coverage=1 00:30:19.866 --rc genhtml_legend=1 00:30:19.866 --rc geninfo_all_blocks=1 00:30:19.866 --rc 
geninfo_unexecuted_blocks=1 00:30:19.866 00:30:19.866 ' 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:30:19.866 15:11:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.433 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.433 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:30:26.433 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:26.433 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:26.433 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:26.433 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:30:26.433 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:26.433 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:30:26.433 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:26.433 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:30:26.433 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:30:26.433 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:30:26.433 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:30:26.433 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:30:26.433 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:30:26.433 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.433 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:26.434 15:11:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:26.434 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:26.434 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:30:26.434 Found net devices under 0000:86:00.0: cvl_0_0 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:26.434 Found net devices under 0000:86:00.1: cvl_0_1 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:26.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:30:26.434 00:30:26.434 --- 10.0.0.2 ping statistics --- 00:30:26.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.434 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:26.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:30:26.434 00:30:26.434 --- 10.0.0.1 ping statistics --- 00:30:26.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.434 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:26.434 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3318653 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3318653 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3318653 ']' 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
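Both E810 ports sit in the same host, so the test builds a real TCP path by pushing the target-side port (cvl_0_0) into a private network namespace and leaving the initiator-side port (cvl_0_1) in the root namespace; the two pings above check reachability in each direction. The plumbing traced above, condensed:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # the real rule also carries an SPDK_NVMF comment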
00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.435 [2024-12-11 15:11:18.661628] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:26.435 [2024-12-11 15:11:18.662569] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:30:26.435 [2024-12-11 15:11:18.662608] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.435 [2024-12-11 15:11:18.743127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.435 [2024-12-11 15:11:18.783637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.435 [2024-12-11 15:11:18.783670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.435 [2024-12-11 15:11:18.783678] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.435 [2024-12-11 15:11:18.783684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.435 [2024-12-11 15:11:18.783689] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:26.435 [2024-12-11 15:11:18.784227] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.435 [2024-12-11 15:11:18.851982] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:26.435 [2024-12-11 15:11:18.852200] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
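The target is then launched inside that namespace, pinned to one core and with --interrupt-mode, which is why the notices above show the app thread and nvmf_tgt_poll_group_000 being switched to interrupt mode instead of busy polling. Stripped of the xtrace noise the launch amounts to this sketch (paths as used in this run):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!            # 3318653 here; waitforlisten then polls /var/tmp/spdk.sock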
00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.435 [2024-12-11 15:11:18.920900] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.435 Malloc0 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
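queue_depth.sh assembles the subsystem with five RPCs; rpc_cmd is a thin wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock. Issued by hand, the sequence traced above would look like this sketch:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420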
00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.435 [2024-12-11 15:11:18.997029] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.435 15:11:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.435 15:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3318777 00:30:26.435 15:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:26.435 15:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:26.435 15:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3318777 /var/tmp/bdevperf.sock 00:30:26.435 15:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3318777 ']' 00:30:26.435 15:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:26.435 15:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.435 15:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:26.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:26.435 15:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.435 15:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.435 [2024-12-11 15:11:19.049821] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
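For the queue-depth measurement bdevperf is started with -z, so it idles on its own RPC socket instead of reading a --json config; the controller attach and the test itself are then pushed over that socket, as the trace below shows. Condensed:

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests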
00:30:26.435 [2024-12-11 15:11:19.049866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3318777 ] 00:30:26.435 [2024-12-11 15:11:19.126535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.435 [2024-12-11 15:11:19.168305] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:26.435 15:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:26.435 15:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:26.435 15:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:26.435 15:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.435 15:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:26.435 NVMe0n1 00:30:26.435 15:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.435 15:11:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:26.435 Running I/O for 10 seconds... 00:30:28.745 11781.00 IOPS, 46.02 MiB/s [2024-12-11T14:11:22.727Z] 12156.00 IOPS, 47.48 MiB/s [2024-12-11T14:11:23.661Z] 12232.00 IOPS, 47.78 MiB/s [2024-12-11T14:11:24.671Z] 12254.25 IOPS, 47.87 MiB/s [2024-12-11T14:11:25.606Z] 12263.40 IOPS, 47.90 MiB/s [2024-12-11T14:11:26.539Z] 12290.67 IOPS, 48.01 MiB/s [2024-12-11T14:11:27.473Z] 12305.29 IOPS, 48.07 MiB/s [2024-12-11T14:11:28.849Z] 12329.25 IOPS, 48.16 MiB/s [2024-12-11T14:11:29.784Z] 12341.67 IOPS, 48.21 MiB/s [2024-12-11T14:11:29.784Z] 12362.80 IOPS, 48.29 MiB/s 00:30:36.736 Latency(us) 00:30:36.736 [2024-12-11T14:11:29.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.736 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:36.736 Verification LBA range: start 0x0 length 0x4000 00:30:36.736 NVMe0n1 : 10.06 12379.41 48.36 0.00 0.00 82411.79 19603.81 51516.99 00:30:36.736 [2024-12-11T14:11:29.784Z] =================================================================================================================== 00:30:36.736 [2024-12-11T14:11:29.784Z] Total : 12379.41 48.36 0.00 0.00 82411.79 19603.81 51516.99 00:30:36.736 { 00:30:36.736 "results": [ 00:30:36.736 { 00:30:36.736 "job": "NVMe0n1", 00:30:36.736 "core_mask": "0x1", 00:30:36.736 "workload": "verify", 00:30:36.736 "status": "finished", 00:30:36.736 "verify_range": { 00:30:36.736 "start": 0, 00:30:36.736 "length": 16384 00:30:36.736 }, 00:30:36.736 "queue_depth": 1024, 00:30:36.737 "io_size": 4096, 00:30:36.737 "runtime": 10.061871, 00:30:36.737 "iops": 12379.407368669306, 00:30:36.737 "mibps": 48.35706003386448, 00:30:36.737 "io_failed": 0, 00:30:36.737 "io_timeout": 0, 00:30:36.737 "avg_latency_us": 82411.79084772835, 00:30:36.737 "min_latency_us": 19603.812173913044, 00:30:36.737 "max_latency_us": 51516.994782608694 00:30:36.737 } 
00:30:36.737 ], 00:30:36.737 "core_count": 1 00:30:36.737 } 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3318777 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3318777 ']' 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3318777 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3318777 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3318777' 00:30:36.737 killing process with pid 3318777 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3318777 00:30:36.737 Received shutdown signal, test time was about 10.000000 seconds 00:30:36.737 00:30:36.737 Latency(us) 00:30:36.737 [2024-12-11T14:11:29.785Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.737 [2024-12-11T14:11:29.785Z] =================================================================================================================== 00:30:36.737 [2024-12-11T14:11:29.785Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3318777 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:36.737 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:36.737 rmmod nvme_tcp 00:30:36.737 rmmod nvme_fabrics 00:30:36.737 rmmod nvme_keyring 00:30:36.995 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:36.995 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:30:36.995 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 
00:30:36.995 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3318653 ']' 00:30:36.995 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3318653 00:30:36.995 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3318653 ']' 00:30:36.996 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3318653 00:30:36.996 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:36.996 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:36.996 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3318653 00:30:36.996 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:36.996 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:36.996 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3318653' 00:30:36.996 killing process with pid 3318653 00:30:36.996 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3318653 00:30:36.996 15:11:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3318653 00:30:36.996 15:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:36.996 15:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:36.996 15:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:36.996 15:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:30:36.996 15:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:30:36.996 15:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:36.996 15:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:30:37.255 15:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:37.255 15:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:37.255 15:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.255 15:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:37.255 15:11:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.158 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:39.158 00:30:39.158 real 0m19.655s 00:30:39.158 user 0m22.644s 00:30:39.158 sys 0m6.267s 00:30:39.158 15:11:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:39.158 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:39.158 ************************************ 00:30:39.158 END TEST nvmf_queue_depth 00:30:39.158 ************************************ 00:30:39.158 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:39.158 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:39.158 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:39.158 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:39.158 ************************************ 00:30:39.158 START TEST nvmf_target_multipath 00:30:39.158 ************************************ 00:30:39.158 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:30:39.418 * Looking for test storage... 00:30:39.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:39.418 15:11:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:39.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.418 --rc genhtml_branch_coverage=1 00:30:39.418 --rc genhtml_function_coverage=1 00:30:39.418 --rc genhtml_legend=1 00:30:39.418 --rc geninfo_all_blocks=1 00:30:39.418 --rc geninfo_unexecuted_blocks=1 00:30:39.418 00:30:39.418 ' 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:39.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.418 --rc genhtml_branch_coverage=1 00:30:39.418 --rc genhtml_function_coverage=1 00:30:39.418 --rc genhtml_legend=1 00:30:39.418 --rc geninfo_all_blocks=1 00:30:39.418 --rc geninfo_unexecuted_blocks=1 00:30:39.418 00:30:39.418 ' 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:39.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.418 --rc genhtml_branch_coverage=1 00:30:39.418 --rc 
genhtml_function_coverage=1 00:30:39.418 --rc genhtml_legend=1 00:30:39.418 --rc geninfo_all_blocks=1 00:30:39.418 --rc geninfo_unexecuted_blocks=1 00:30:39.418 00:30:39.418 ' 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:39.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:39.418 --rc genhtml_branch_coverage=1 00:30:39.418 --rc genhtml_function_coverage=1 00:30:39.418 --rc genhtml_legend=1 00:30:39.418 --rc geninfo_all_blocks=1 00:30:39.418 --rc geninfo_unexecuted_blocks=1 00:30:39.418 00:30:39.418 ' 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:30:39.418 15:11:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.418 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:30:39.419 15:11:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # 
xtrace_disable 00:30:39.419 15:11:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:45.989 15:11:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:45.989 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:45.989 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:45.989 
15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:45.989 Found net devices under 0000:86:00.0: cvl_0_0 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:45.989 Found net devices under 0000:86:00.1: cvl_0_1 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:45.989 15:11:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:45.989 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:45.989 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:45.989 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:45.989 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:45.989 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:45.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:45.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:30:45.990 00:30:45.990 --- 10.0.0.2 ping statistics --- 00:30:45.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.990 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:45.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:45.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:30:45.990 00:30:45.990 --- 10.0.0.1 ping statistics --- 00:30:45.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:45.990 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:30:45.990 only one NIC for nvmf test 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:45.990 rmmod nvme_tcp 00:30:45.990 rmmod nvme_fabrics 00:30:45.990 rmmod nvme_keyring 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:45.990 15:11:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.990 15:11:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:30:47.898 15:11:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:47.898 00:30:47.898 real 0m8.352s 00:30:47.898 user 0m1.777s 00:30:47.898 sys 0m4.508s 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:30:47.898 ************************************ 00:30:47.898 END TEST nvmf_target_multipath 00:30:47.898 ************************************ 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:47.898 ************************************ 00:30:47.898 START TEST nvmf_zcopy 00:30:47.898 ************************************ 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:30:47.898 * Looking for test storage... 
00:30:47.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:47.898 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:47.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.899 --rc genhtml_branch_coverage=1 00:30:47.899 --rc genhtml_function_coverage=1 00:30:47.899 --rc genhtml_legend=1 00:30:47.899 --rc geninfo_all_blocks=1 00:30:47.899 --rc geninfo_unexecuted_blocks=1 00:30:47.899 00:30:47.899 ' 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:47.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.899 --rc genhtml_branch_coverage=1 00:30:47.899 --rc genhtml_function_coverage=1 00:30:47.899 --rc genhtml_legend=1 00:30:47.899 --rc geninfo_all_blocks=1 00:30:47.899 --rc geninfo_unexecuted_blocks=1 00:30:47.899 00:30:47.899 ' 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:47.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.899 --rc genhtml_branch_coverage=1 00:30:47.899 --rc genhtml_function_coverage=1 00:30:47.899 --rc genhtml_legend=1 00:30:47.899 --rc geninfo_all_blocks=1 00:30:47.899 --rc geninfo_unexecuted_blocks=1 00:30:47.899 00:30:47.899 ' 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:47.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.899 --rc genhtml_branch_coverage=1 00:30:47.899 --rc genhtml_function_coverage=1 00:30:47.899 --rc genhtml_legend=1 00:30:47.899 --rc geninfo_all_blocks=1 00:30:47.899 --rc geninfo_unexecuted_blocks=1 00:30:47.899 00:30:47.899 ' 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:47.899 15:11:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:30:47.899 15:11:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:30:54.472 15:11:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:54.472 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:54.473 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:54.473 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:54.473 Found net devices under 0000:86:00.0: cvl_0_0 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:54.473 Found net devices under 0000:86:00.1: cvl_0_1 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:54.473 15:11:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:54.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:54.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:30:54.473 00:30:54.473 --- 10.0.0.2 ping statistics --- 00:30:54.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.473 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:54.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:54.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:30:54.473 00:30:54.473 --- 10.0.0.1 ping statistics --- 00:30:54.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:54.473 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3327439 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3327439 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3327439 ']' 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:54.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:54.473 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.473 [2024-12-11 15:11:46.765649] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:54.473 [2024-12-11 15:11:46.766556] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:30:54.473 [2024-12-11 15:11:46.766588] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:54.473 [2024-12-11 15:11:46.847699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.473 [2024-12-11 15:11:46.888821] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:54.474 [2024-12-11 15:11:46.888856] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:54.474 [2024-12-11 15:11:46.888867] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:54.474 [2024-12-11 15:11:46.888874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:54.474 [2024-12-11 15:11:46.888879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:54.474 [2024-12-11 15:11:46.889430] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.474 [2024-12-11 15:11:46.957934] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:54.474 [2024-12-11 15:11:46.958145] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
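For reference, the namespaced bring-up that nvmf_tcp_init performs in the trace above condenses to the following sketch. The cvl_0_0/cvl_0_1 interface names are the ones this run detected under the two E810 ports and will differ on other hosts; paths are relative to the SPDK tree.

# move the target-side port into its own namespace and address both ends
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the default NVMe/TCP port and sanity-check both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# launch the target inside the namespace in interrupt mode, pinned to core 1 (-m 0x2)
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2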
00:30:54.474 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:54.474 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:30:54.474 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:54.474 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:54.474 15:11:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.474 [2024-12-11 15:11:47.034064] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.474 [2024-12-11 15:11:47.062401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:30:54.474 15:11:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.474 malloc0 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:54.474 { 00:30:54.474 "params": { 00:30:54.474 "name": "Nvme$subsystem", 00:30:54.474 "trtype": "$TEST_TRANSPORT", 00:30:54.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:54.474 "adrfam": "ipv4", 00:30:54.474 "trsvcid": "$NVMF_PORT", 00:30:54.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:54.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:54.474 "hdgst": ${hdgst:-false}, 00:30:54.474 "ddgst": ${ddgst:-false} 00:30:54.474 }, 00:30:54.474 "method": "bdev_nvme_attach_controller" 00:30:54.474 } 00:30:54.474 EOF 00:30:54.474 )") 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:30:54.474 15:11:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:54.474 "params": { 00:30:54.474 "name": "Nvme1", 00:30:54.474 "trtype": "tcp", 00:30:54.474 "traddr": "10.0.0.2", 00:30:54.474 "adrfam": "ipv4", 00:30:54.474 "trsvcid": "4420", 00:30:54.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:54.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:54.474 "hdgst": false, 00:30:54.474 "ddgst": false 00:30:54.474 }, 00:30:54.474 "method": "bdev_nvme_attach_controller" 00:30:54.474 }' 00:30:54.474 [2024-12-11 15:11:47.156342] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
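Spelled out, the rpc_cmd calls traced above configure the zero-copy target as follows. rpc_cmd in this harness is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket, which is how the sequence is shown here.

# TCP transport with zero-copy enabled, flags exactly as passed by zcopy.sh
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
# subsystem with room for 10 namespaces, plus data and discovery listeners on the namespaced address
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1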
00:30:54.474 [2024-12-11 15:11:47.156388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3327558 ] 00:30:54.474 [2024-12-11 15:11:47.231947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.474 [2024-12-11 15:11:47.274517] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.733 Running I/O for 10 seconds... 00:30:56.604 8389.00 IOPS, 65.54 MiB/s [2024-12-11T14:11:51.029Z] 8458.50 IOPS, 66.08 MiB/s [2024-12-11T14:11:51.966Z] 8421.67 IOPS, 65.79 MiB/s [2024-12-11T14:11:52.900Z] 8452.25 IOPS, 66.03 MiB/s [2024-12-11T14:11:53.837Z] 8461.20 IOPS, 66.10 MiB/s [2024-12-11T14:11:54.772Z] 8474.83 IOPS, 66.21 MiB/s [2024-12-11T14:11:55.706Z] 8481.43 IOPS, 66.26 MiB/s [2024-12-11T14:11:56.643Z] 8486.88 IOPS, 66.30 MiB/s [2024-12-11T14:11:58.021Z] 8493.00 IOPS, 66.35 MiB/s [2024-12-11T14:11:58.021Z] 8501.20 IOPS, 66.42 MiB/s 00:31:04.973 Latency(us) 00:31:04.973 [2024-12-11T14:11:58.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:04.973 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:04.973 Verification LBA range: start 0x0 length 0x1000 00:31:04.973 Nvme1n1 : 10.01 8502.82 66.43 0.00 0.00 15010.65 2550.21 21541.40 00:31:04.973 [2024-12-11T14:11:58.021Z] =================================================================================================================== 00:31:04.973 [2024-12-11T14:11:58.021Z] Total : 8502.82 66.43 0.00 0.00 15010.65 2550.21 21541.40 00:31:04.973 15:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3329162 00:31:04.973 15:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:31:04.973 15:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:04.973 15:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:04.973 15:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:31:04.973 15:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:04.973 15:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:04.973 15:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:04.973 15:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:04.973 { 00:31:04.973 "params": { 00:31:04.973 "name": "Nvme$subsystem", 00:31:04.973 "trtype": "$TEST_TRANSPORT", 00:31:04.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:04.973 "adrfam": "ipv4", 00:31:04.973 "trsvcid": "$NVMF_PORT", 00:31:04.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:04.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:04.973 "hdgst": ${hdgst:-false}, 00:31:04.973 "ddgst": ${ddgst:-false} 00:31:04.973 }, 00:31:04.973 "method": "bdev_nvme_attach_controller" 00:31:04.973 } 00:31:04.973 EOF 00:31:04.973 )") 00:31:04.973 15:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:04.973 
[2024-12-11 15:11:57.789787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.973 [2024-12-11 15:11:57.789820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.973 15:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:04.973 15:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:04.973 15:11:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:04.973 "params": { 00:31:04.973 "name": "Nvme1", 00:31:04.973 "trtype": "tcp", 00:31:04.973 "traddr": "10.0.0.2", 00:31:04.973 "adrfam": "ipv4", 00:31:04.973 "trsvcid": "4420", 00:31:04.973 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:04.973 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:04.973 "hdgst": false, 00:31:04.973 "ddgst": false 00:31:04.973 }, 00:31:04.973 "method": "bdev_nvme_attach_controller" 00:31:04.973 }' 00:31:04.973 [2024-12-11 15:11:57.801740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.973 [2024-12-11 15:11:57.801754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.973 [2024-12-11 15:11:57.813736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.973 [2024-12-11 15:11:57.813747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.973 [2024-12-11 15:11:57.825736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.973 [2024-12-11 15:11:57.825746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.973 [2024-12-11 15:11:57.827601] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:31:04.973 [2024-12-11 15:11:57.827645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3329162 ] 00:31:04.974 [2024-12-11 15:11:57.837736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.974 [2024-12-11 15:11:57.837748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.974 [2024-12-11 15:11:57.849735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.974 [2024-12-11 15:11:57.849746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.974 [2024-12-11 15:11:57.861735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.974 [2024-12-11 15:11:57.861751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.974 [2024-12-11 15:11:57.873734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.974 [2024-12-11 15:11:57.873745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.974 [2024-12-11 15:11:57.885733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.974 [2024-12-11 15:11:57.885744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.974 [2024-12-11 15:11:57.897733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.974 [2024-12-11 15:11:57.897742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.974 [2024-12-11 15:11:57.901985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.974 [2024-12-11 15:11:57.909734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.974 [2024-12-11 15:11:57.909746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.974 [2024-12-11 15:11:57.921734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.974 [2024-12-11 15:11:57.921748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.974 [2024-12-11 15:11:57.933744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.974 [2024-12-11 15:11:57.933761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.974 [2024-12-11 15:11:57.943014] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.974 [2024-12-11 15:11:57.945735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.974 [2024-12-11 15:11:57.945747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.974 [2024-12-11 15:11:57.957747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.974 [2024-12-11 15:11:57.957766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.974 [2024-12-11 15:11:57.969743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.974 [2024-12-11 15:11:57.969763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.974 [2024-12-11 15:11:57.981742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:31:04.974 [2024-12-11 15:11:57.981756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.974 [2024-12-11 15:11:57.993739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.974 [2024-12-11 15:11:57.993750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.974 [2024-12-11 15:11:58.005740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.974 [2024-12-11 15:11:58.005753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:04.974 [2024-12-11 15:11:58.017737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:04.974 [2024-12-11 15:11:58.017748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.233 [2024-12-11 15:11:58.029750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.233 [2024-12-11 15:11:58.029774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.233 [2024-12-11 15:11:58.041738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.233 [2024-12-11 15:11:58.041753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.233 [2024-12-11 15:11:58.053740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.233 [2024-12-11 15:11:58.053756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.233 [2024-12-11 15:11:58.065739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.233 [2024-12-11 15:11:58.065754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.233 [2024-12-11 15:11:58.077736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.233 [2024-12-11 15:11:58.077750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.233 [2024-12-11 15:11:58.122365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.233 [2024-12-11 15:11:58.122385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.233 [2024-12-11 15:11:58.133738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.233 [2024-12-11 15:11:58.133751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.233 Running I/O for 5 seconds... 
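Both bdevperf runs are fed the attach-controller entry that gen_nvmf_target_json printed above through a process-substitution fd. A minimal standalone equivalent of the second (5-second randrw) invocation, assuming a hypothetical /tmp path and omitting the extra config entries the helper adds that are not visible in this excerpt, would look like:

cat > /tmp/nvmf_target.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# 5 s of 50/50 random read/write at 8 KiB, queue depth 128, matching the flags traced above
./build/examples/bdevperf --json /tmp/nvmf_target.json -t 5 -q 128 -w randrw -M 50 -o 8192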
00:31:05.233 [2024-12-11 15:11:58.149418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.233 [2024-12-11 15:11:58.149444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.233 [2024-12-11 15:11:58.163486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.233 [2024-12-11 15:11:58.163506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.233 [2024-12-11 15:11:58.178814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.233 [2024-12-11 15:11:58.178835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.233 [2024-12-11 15:11:58.194453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.233 [2024-12-11 15:11:58.194473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.233 [2024-12-11 15:11:58.209785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.233 [2024-12-11 15:11:58.209805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.233 [2024-12-11 15:11:58.222331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.233 [2024-12-11 15:11:58.222350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.233 [2024-12-11 15:11:58.235262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.233 [2024-12-11 15:11:58.235282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.233 [2024-12-11 15:11:58.250739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.233 [2024-12-11 15:11:58.250759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.233 [2024-12-11 15:11:58.265838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.233 [2024-12-11 15:11:58.265857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.233 [2024-12-11 15:11:58.277376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.233 [2024-12-11 15:11:58.277397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.291795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 [2024-12-11 15:11:58.291815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.306929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 [2024-12-11 15:11:58.306948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.322223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 [2024-12-11 15:11:58.322242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.337966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 [2024-12-11 15:11:58.337986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.350401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 
[2024-12-11 15:11:58.350420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.363318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 [2024-12-11 15:11:58.363337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.378475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 [2024-12-11 15:11:58.378498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.393606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 [2024-12-11 15:11:58.393627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.408049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 [2024-12-11 15:11:58.408070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.423415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 [2024-12-11 15:11:58.423435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.438573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 [2024-12-11 15:11:58.438600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.453604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 [2024-12-11 15:11:58.453624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.467115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 [2024-12-11 15:11:58.467135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.481932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 [2024-12-11 15:11:58.481952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.493279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 [2024-12-11 15:11:58.493298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.507644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 [2024-12-11 15:11:58.507663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.522599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 [2024-12-11 15:11:58.522618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.493 [2024-12-11 15:11:58.537746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.493 [2024-12-11 15:11:58.537766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.550611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.550629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.565742] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.565761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.578742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.578761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.593268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.593287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.607105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.607124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.622395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.622414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.637568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.637587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.650909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.650931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.665587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.665606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.679723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.679742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.694477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.694495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.710363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.710381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.726177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.726196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.741221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.741240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.755776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.755794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.770742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.770762] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.781236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.781255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:05.752 [2024-12-11 15:11:58.795405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:05.752 [2024-12-11 15:11:58.795425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.011 [2024-12-11 15:11:58.810599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.011 [2024-12-11 15:11:58.810618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.011 [2024-12-11 15:11:58.825871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.011 [2024-12-11 15:11:58.825891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.011 [2024-12-11 15:11:58.836589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.011 [2024-12-11 15:11:58.836608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.011 [2024-12-11 15:11:58.851297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.011 [2024-12-11 15:11:58.851316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.012 [2024-12-11 15:11:58.866181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.012 [2024-12-11 15:11:58.866199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.012 [2024-12-11 15:11:58.881761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.012 [2024-12-11 15:11:58.881780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.012 [2024-12-11 15:11:58.895170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.012 [2024-12-11 15:11:58.895189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.012 [2024-12-11 15:11:58.909821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.012 [2024-12-11 15:11:58.909840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.012 [2024-12-11 15:11:58.922391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.012 [2024-12-11 15:11:58.922418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.012 [2024-12-11 15:11:58.935377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.012 [2024-12-11 15:11:58.935396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.012 [2024-12-11 15:11:58.950584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.012 [2024-12-11 15:11:58.950603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.012 [2024-12-11 15:11:58.965784] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.012 [2024-12-11 15:11:58.965803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.012 [2024-12-11 15:11:58.977282] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.012 [2024-12-11 15:11:58.977300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.012 [2024-12-11 15:11:58.991765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.012 [2024-12-11 15:11:58.991785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.012 [2024-12-11 15:11:59.006880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.012 [2024-12-11 15:11:59.006902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.012 [2024-12-11 15:11:59.022082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.012 [2024-12-11 15:11:59.022102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.012 [2024-12-11 15:11:59.038120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.012 [2024-12-11 15:11:59.038139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.012 [2024-12-11 15:11:59.054031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.012 [2024-12-11 15:11:59.054050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.271 [2024-12-11 15:11:59.069499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.271 [2024-12-11 15:11:59.069518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.271 [2024-12-11 15:11:59.084349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.271 [2024-12-11 15:11:59.084368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.271 [2024-12-11 15:11:59.099452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.271 [2024-12-11 15:11:59.099472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.271 [2024-12-11 15:11:59.114652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.271 [2024-12-11 15:11:59.114672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.271 [2024-12-11 15:11:59.130396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.271 [2024-12-11 15:11:59.130415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.271 16511.00 IOPS, 128.99 MiB/s [2024-12-11T14:11:59.319Z] [2024-12-11 15:11:59.146217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.271 [2024-12-11 15:11:59.146236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.271 [2024-12-11 15:11:59.161762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.271 [2024-12-11 15:11:59.161781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.271 [2024-12-11 15:11:59.176185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.271 [2024-12-11 15:11:59.176205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.271 [2024-12-11 15:11:59.191197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:31:06.271 [2024-12-11 15:11:59.191216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.271 [2024-12-11 15:11:59.205782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.271 [2024-12-11 15:11:59.205800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.271 [2024-12-11 15:11:59.218978] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.271 [2024-12-11 15:11:59.218998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.271 [2024-12-11 15:11:59.233763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.271 [2024-12-11 15:11:59.233782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.271 [2024-12-11 15:11:59.246181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.271 [2024-12-11 15:11:59.246200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.271 [2024-12-11 15:11:59.261555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.271 [2024-12-11 15:11:59.261574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.271 [2024-12-11 15:11:59.274283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.271 [2024-12-11 15:11:59.274302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.271 [2024-12-11 15:11:59.289436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.271 [2024-12-11 15:11:59.289455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.271 [2024-12-11 15:11:59.302563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.271 [2024-12-11 15:11:59.302582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.318147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.318172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.333863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.333883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.345511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.345530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.359796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.359815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.374780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.374799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.389429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.389448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.403171] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.403190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.418019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.418037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.434248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.434267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.450021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.450040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.466416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.466434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.481129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.481148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.495543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.495561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.510660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.510679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.525754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.525773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.538571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.538592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.553747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.553767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.530 [2024-12-11 15:11:59.565354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.530 [2024-12-11 15:11:59.565374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.789 [2024-12-11 15:11:59.579962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.789 [2024-12-11 15:11:59.579982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.789 [2024-12-11 15:11:59.594986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.789 [2024-12-11 15:11:59.595007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.790 [2024-12-11 15:11:59.609884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.790 [2024-12-11 15:11:59.609903] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.790 [2024-12-11 15:11:59.622699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.790 [2024-12-11 15:11:59.622718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.790 [2024-12-11 15:11:59.637674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.790 [2024-12-11 15:11:59.637693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.790 [2024-12-11 15:11:59.649946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.790 [2024-12-11 15:11:59.649965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.790 [2024-12-11 15:11:59.663322] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.790 [2024-12-11 15:11:59.663341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.790 [2024-12-11 15:11:59.678364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.790 [2024-12-11 15:11:59.678383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.790 [2024-12-11 15:11:59.693169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.790 [2024-12-11 15:11:59.693188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.790 [2024-12-11 15:11:59.706540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.790 [2024-12-11 15:11:59.706559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.790 [2024-12-11 15:11:59.721812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.790 [2024-12-11 15:11:59.721831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.790 [2024-12-11 15:11:59.732878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.790 [2024-12-11 15:11:59.732901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.790 [2024-12-11 15:11:59.747656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.790 [2024-12-11 15:11:59.747676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.790 [2024-12-11 15:11:59.762665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.790 [2024-12-11 15:11:59.762685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.790 [2024-12-11 15:11:59.777065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.790 [2024-12-11 15:11:59.777085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.790 [2024-12-11 15:11:59.791296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.790 [2024-12-11 15:11:59.791315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.790 [2024-12-11 15:11:59.806370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.790 [2024-12-11 15:11:59.806388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.790 [2024-12-11 15:11:59.822038] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.790 [2024-12-11 15:11:59.822057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:06.790 [2024-12-11 15:11:59.834701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:06.790 [2024-12-11 15:11:59.834719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.049 [2024-12-11 15:11:59.849951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.049 [2024-12-11 15:11:59.849971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.049 [2024-12-11 15:11:59.860574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.049 [2024-12-11 15:11:59.860593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.049 [2024-12-11 15:11:59.875420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.049 [2024-12-11 15:11:59.875438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.049 [2024-12-11 15:11:59.890020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.049 [2024-12-11 15:11:59.890038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.049 [2024-12-11 15:11:59.905266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.049 [2024-12-11 15:11:59.905285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.049 [2024-12-11 15:11:59.919280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.049 [2024-12-11 15:11:59.919298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.049 [2024-12-11 15:11:59.934345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.049 [2024-12-11 15:11:59.934365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.049 [2024-12-11 15:11:59.949304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.049 [2024-12-11 15:11:59.949324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.049 [2024-12-11 15:11:59.960714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.049 [2024-12-11 15:11:59.960733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.049 [2024-12-11 15:11:59.975713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.049 [2024-12-11 15:11:59.975733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.049 [2024-12-11 15:11:59.991175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.049 [2024-12-11 15:11:59.991194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.049 [2024-12-11 15:12:00.006613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.049 [2024-12-11 15:12:00.006642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.049 [2024-12-11 15:12:00.022100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.049 [2024-12-11 15:12:00.022121] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.049 [2024-12-11 15:12:00.037938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.049 [2024-12-11 15:12:00.037960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.049 [2024-12-11 15:12:00.051897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.049 [2024-12-11 15:12:00.051918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.049 [2024-12-11 15:12:00.067610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.049 [2024-12-11 15:12:00.067630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.049 [2024-12-11 15:12:00.082676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.049 [2024-12-11 15:12:00.082695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.308 [2024-12-11 15:12:00.097894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.308 [2024-12-11 15:12:00.097913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.308 [2024-12-11 15:12:00.109484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.308 [2024-12-11 15:12:00.109504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.308 [2024-12-11 15:12:00.123656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.308 [2024-12-11 15:12:00.123675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.308 [2024-12-11 15:12:00.138683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.308 [2024-12-11 15:12:00.138702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.308 16467.00 IOPS, 128.65 MiB/s [2024-12-11T14:12:00.356Z] [2024-12-11 15:12:00.153469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.308 [2024-12-11 15:12:00.153487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.308 [2024-12-11 15:12:00.167386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.308 [2024-12-11 15:12:00.167406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.308 [2024-12-11 15:12:00.182310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.308 [2024-12-11 15:12:00.182328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.308 [2024-12-11 15:12:00.197545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.308 [2024-12-11 15:12:00.197565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.308 [2024-12-11 15:12:00.208437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.308 [2024-12-11 15:12:00.208456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.309 [2024-12-11 15:12:00.223377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.309 [2024-12-11 15:12:00.223397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.309 [2024-12-11 
15:12:00.239086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.309 [2024-12-11 15:12:00.239105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.309 [2024-12-11 15:12:00.253982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.309 [2024-12-11 15:12:00.254001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.309 [2024-12-11 15:12:00.264753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.309 [2024-12-11 15:12:00.264772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.309 [2024-12-11 15:12:00.279614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.309 [2024-12-11 15:12:00.279638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.309 [2024-12-11 15:12:00.294286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.309 [2024-12-11 15:12:00.294305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.309 [2024-12-11 15:12:00.309816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.309 [2024-12-11 15:12:00.309835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.309 [2024-12-11 15:12:00.322972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.309 [2024-12-11 15:12:00.322990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.309 [2024-12-11 15:12:00.338280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.309 [2024-12-11 15:12:00.338299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.309 [2024-12-11 15:12:00.353551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.309 [2024-12-11 15:12:00.353570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.568 [2024-12-11 15:12:00.367608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.568 [2024-12-11 15:12:00.367627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.568 [2024-12-11 15:12:00.382741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.568 [2024-12-11 15:12:00.382760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.568 [2024-12-11 15:12:00.397977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.568 [2024-12-11 15:12:00.397996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.568 [2024-12-11 15:12:00.408652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.568 [2024-12-11 15:12:00.408671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.568 [2024-12-11 15:12:00.423789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.568 [2024-12-11 15:12:00.423808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.568 [2024-12-11 15:12:00.438781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.568 [2024-12-11 15:12:00.438800] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.568 [2024-12-11 15:12:00.453767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.568 [2024-12-11 15:12:00.453787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.568 [2024-12-11 15:12:00.466372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.568 [2024-12-11 15:12:00.466391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.568 [2024-12-11 15:12:00.478964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.568 [2024-12-11 15:12:00.478984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.568 [2024-12-11 15:12:00.490096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.568 [2024-12-11 15:12:00.490114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.569 [2024-12-11 15:12:00.503722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.569 [2024-12-11 15:12:00.503741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.569 [2024-12-11 15:12:00.518772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.569 [2024-12-11 15:12:00.518791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.569 [2024-12-11 15:12:00.533233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.569 [2024-12-11 15:12:00.533253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.569 [2024-12-11 15:12:00.546304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.569 [2024-12-11 15:12:00.546329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.569 [2024-12-11 15:12:00.561971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.569 [2024-12-11 15:12:00.561990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.569 [2024-12-11 15:12:00.573535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.569 [2024-12-11 15:12:00.573556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.569 [2024-12-11 15:12:00.587411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.569 [2024-12-11 15:12:00.587430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.569 [2024-12-11 15:12:00.602794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.569 [2024-12-11 15:12:00.602815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.617548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.617568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.631092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.631112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.641788] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.641807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.655870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.655889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.670863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.670882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.685848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.685868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.697445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.697465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.711565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.711585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.726383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.726402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.739050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.739069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.754103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.754122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.769571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.769591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.782710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.782729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.795172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.795192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.805577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.805596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.819587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.819606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.834569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.834589] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.849249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.849268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:07.828 [2024-12-11 15:12:00.862509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:07.828 [2024-12-11 15:12:00.862528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:00.877693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:00.877713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:00.891778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:00.891798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:00.907108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:00.907127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:00.917471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:00.917490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:00.931865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:00.931884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:00.946459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:00.946478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:00.961439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:00.961458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:00.974290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:00.974309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:00.987486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:00.987507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:01.002437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:01.002456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:01.017654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:01.017676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:01.031356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:01.031377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:01.046489] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:01.046510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:01.061523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:01.061542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:01.074203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:01.074222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:01.087719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:01.087738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:01.102626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:01.102645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:01.117228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:01.117247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.088 [2024-12-11 15:12:01.132060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.088 [2024-12-11 15:12:01.132080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 16505.00 IOPS, 128.95 MiB/s [2024-12-11T14:12:01.395Z] [2024-12-11 15:12:01.147299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.347 [2024-12-11 15:12:01.147319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 [2024-12-11 15:12:01.162348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.347 [2024-12-11 15:12:01.162367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 [2024-12-11 15:12:01.177724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.347 [2024-12-11 15:12:01.177744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 [2024-12-11 15:12:01.190508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.347 [2024-12-11 15:12:01.190528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 [2024-12-11 15:12:01.203251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.347 [2024-12-11 15:12:01.203271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 [2024-12-11 15:12:01.218634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.347 [2024-12-11 15:12:01.218656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 [2024-12-11 15:12:01.233877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.347 [2024-12-11 15:12:01.233896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 [2024-12-11 15:12:01.246411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:31:08.347 [2024-12-11 15:12:01.246430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 [2024-12-11 15:12:01.262294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.347 [2024-12-11 15:12:01.262314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 [2024-12-11 15:12:01.274611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.347 [2024-12-11 15:12:01.274631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 [2024-12-11 15:12:01.289935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.347 [2024-12-11 15:12:01.289954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 [2024-12-11 15:12:01.300252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.347 [2024-12-11 15:12:01.300271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 [2024-12-11 15:12:01.315269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.347 [2024-12-11 15:12:01.315288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 [2024-12-11 15:12:01.330149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.347 [2024-12-11 15:12:01.330180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 [2024-12-11 15:12:01.342500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.347 [2024-12-11 15:12:01.342519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 [2024-12-11 15:12:01.355241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.347 [2024-12-11 15:12:01.355261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 [2024-12-11 15:12:01.370367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.347 [2024-12-11 15:12:01.370386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.347 [2024-12-11 15:12:01.385456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.347 [2024-12-11 15:12:01.385476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.605 [2024-12-11 15:12:01.398788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.605 [2024-12-11 15:12:01.398807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.605 [2024-12-11 15:12:01.413957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.605 [2024-12-11 15:12:01.413977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.605 [2024-12-11 15:12:01.424617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.605 [2024-12-11 15:12:01.424637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.606 [2024-12-11 15:12:01.439471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.606 [2024-12-11 15:12:01.439491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.606 [2024-12-11 15:12:01.454946] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.606 [2024-12-11 15:12:01.454966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.606 [2024-12-11 15:12:01.469824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.606 [2024-12-11 15:12:01.469843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.606 [2024-12-11 15:12:01.484010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.606 [2024-12-11 15:12:01.484028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.606 [2024-12-11 15:12:01.499101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.606 [2024-12-11 15:12:01.499120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.606 [2024-12-11 15:12:01.513624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.606 [2024-12-11 15:12:01.513643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.606 [2024-12-11 15:12:01.526722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.606 [2024-12-11 15:12:01.526741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.606 [2024-12-11 15:12:01.541704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.606 [2024-12-11 15:12:01.541723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.606 [2024-12-11 15:12:01.553097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.606 [2024-12-11 15:12:01.553115] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.606 [2024-12-11 15:12:01.568140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.606 [2024-12-11 15:12:01.568166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.606 [2024-12-11 15:12:01.582994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.606 [2024-12-11 15:12:01.583014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.606 [2024-12-11 15:12:01.598199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.606 [2024-12-11 15:12:01.598224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.606 [2024-12-11 15:12:01.613960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.606 [2024-12-11 15:12:01.613980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.606 [2024-12-11 15:12:01.626801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.606 [2024-12-11 15:12:01.626820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.606 [2024-12-11 15:12:01.639388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.606 [2024-12-11 15:12:01.639408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.654652] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.654671] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.669924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.669943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.681853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.681873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.695331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.695350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.710528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.710546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.725579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.725598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.739814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.739833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.754852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.754871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.769442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.769461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.783901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.783921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.798665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.798684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.814133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.814152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.829677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.829697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.842055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.842074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.855439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.855458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.870269] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.870292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.886489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.886509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:08.865 [2024-12-11 15:12:01.901596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:08.865 [2024-12-11 15:12:01.901615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 [2024-12-11 15:12:01.915732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:01.915751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 [2024-12-11 15:12:01.930953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:01.930973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 [2024-12-11 15:12:01.945720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:01.945739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 [2024-12-11 15:12:01.958342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:01.958360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 [2024-12-11 15:12:01.974000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:01.974020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 [2024-12-11 15:12:01.986234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:01.986253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 [2024-12-11 15:12:01.999088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:01.999109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 [2024-12-11 15:12:02.014368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:02.014387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 [2024-12-11 15:12:02.029154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:02.029179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 [2024-12-11 15:12:02.042681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:02.042700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 [2024-12-11 15:12:02.058110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:02.058128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 [2024-12-11 15:12:02.073917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:02.073937] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 [2024-12-11 15:12:02.086184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:02.086202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 [2024-12-11 15:12:02.100012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:02.100032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 [2024-12-11 15:12:02.115310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:02.115330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 [2024-12-11 15:12:02.130496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:02.130514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 [2024-12-11 15:12:02.146209] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:02.146233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.124 16502.25 IOPS, 128.92 MiB/s [2024-12-11T14:12:02.172Z] [2024-12-11 15:12:02.161869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.124 [2024-12-11 15:12:02.161889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 15:12:02.175791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.175810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 15:12:02.191369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.191388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 15:12:02.206254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.206273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 15:12:02.221858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.221877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 15:12:02.234905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.234924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 15:12:02.250139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.250163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 15:12:02.265783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.265802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 15:12:02.278311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.278329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 
15:12:02.293482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.293501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 15:12:02.307408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.307426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 15:12:02.322913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.322931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 15:12:02.337447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.337466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 15:12:02.350079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.350099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 15:12:02.364045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.364064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 15:12:02.379631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.379651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 15:12:02.394503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.394522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 15:12:02.410069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.410088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.384 [2024-12-11 15:12:02.425761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.384 [2024-12-11 15:12:02.425781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.643 [2024-12-11 15:12:02.439990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.643 [2024-12-11 15:12:02.440011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.643 [2024-12-11 15:12:02.455581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.643 [2024-12-11 15:12:02.455602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.643 [2024-12-11 15:12:02.470221] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.643 [2024-12-11 15:12:02.470241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.643 [2024-12-11 15:12:02.485536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.643 [2024-12-11 15:12:02.485555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.643 [2024-12-11 15:12:02.496964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.643 [2024-12-11 15:12:02.496984] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.643 [2024-12-11 15:12:02.511589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.643 [2024-12-11 15:12:02.511608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.643 [2024-12-11 15:12:02.526205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.643 [2024-12-11 15:12:02.526223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.643 [2024-12-11 15:12:02.542128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.643 [2024-12-11 15:12:02.542147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.643 [2024-12-11 15:12:02.558072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.643 [2024-12-11 15:12:02.558092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.643 [2024-12-11 15:12:02.573889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.643 [2024-12-11 15:12:02.573909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.643 [2024-12-11 15:12:02.586943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.643 [2024-12-11 15:12:02.586964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.643 [2024-12-11 15:12:02.602297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.643 [2024-12-11 15:12:02.602316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.643 [2024-12-11 15:12:02.617904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.643 [2024-12-11 15:12:02.617923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.643 [2024-12-11 15:12:02.631456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.643 [2024-12-11 15:12:02.631476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.643 [2024-12-11 15:12:02.646198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.643 [2024-12-11 15:12:02.646217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.643 [2024-12-11 15:12:02.661518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.643 [2024-12-11 15:12:02.661538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.643 [2024-12-11 15:12:02.675834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.643 [2024-12-11 15:12:02.675854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.691281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.691300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.706427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.706447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.722081] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.722100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.737741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.737760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.751506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.751525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.766584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.766603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.782273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.782292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.798241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.798266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.813629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.813649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.824976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.824995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.839518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.839537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.854391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.854410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.869535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.869554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.881919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.881938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.895273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.895292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.910139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.910163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.925834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.925852] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:09.902 [2024-12-11 15:12:02.939107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:09.902 [2024-12-11 15:12:02.939126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.161 [2024-12-11 15:12:02.953930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.161 [2024-12-11 15:12:02.953951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.161 [2024-12-11 15:12:02.964840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.161 [2024-12-11 15:12:02.964859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.161 [2024-12-11 15:12:02.980020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.161 [2024-12-11 15:12:02.980039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.161 [2024-12-11 15:12:02.994859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.161 [2024-12-11 15:12:02.994877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.161 [2024-12-11 15:12:03.006213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.161 [2024-12-11 15:12:03.006232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.161 [2024-12-11 15:12:03.019434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.161 [2024-12-11 15:12:03.019453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.161 [2024-12-11 15:12:03.034366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.161 [2024-12-11 15:12:03.034384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.161 [2024-12-11 15:12:03.050066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.161 [2024-12-11 15:12:03.050085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.161 [2024-12-11 15:12:03.066106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.161 [2024-12-11 15:12:03.066125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.161 [2024-12-11 15:12:03.081709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.161 [2024-12-11 15:12:03.081730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.161 [2024-12-11 15:12:03.095324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.161 [2024-12-11 15:12:03.095344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.161 [2024-12-11 15:12:03.110234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.161 [2024-12-11 15:12:03.110253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.161 [2024-12-11 15:12:03.125888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.162 [2024-12-11 15:12:03.125906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.162 [2024-12-11 15:12:03.139422] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:10.162 [2024-12-11 15:12:03.139441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:10.162 16492.60 IOPS, 128.85 MiB/s [2024-12-11T14:12:03.210Z] [2024-12-11 15:12:03.153826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:10.162 [2024-12-11 15:12:03.153845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:10.162
00:31:10.162 Latency(us)
00:31:10.162 [2024-12-11T14:12:03.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:10.162 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:31:10.162 Nvme1n1 : 5.01 16496.04 128.88 0.00 0.00 7751.84 2008.82 13107.20
00:31:10.162 [2024-12-11T14:12:03.210Z] ===================================================================================================================
00:31:10.162 [2024-12-11T14:12:03.210Z] Total : 16496.04 128.88 0.00 0.00 7751.84 2008.82 13107.20
00:31:10.162 [2024-12-11 15:12:03.165740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:10.162 [2024-12-11 15:12:03.165758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:10.162 [2024-12-11 15:12:03.177744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:10.162 [2024-12-11 15:12:03.177759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:10.162 [2024-12-11 15:12:03.189753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:10.162 [2024-12-11 15:12:03.189781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:10.162 [2024-12-11 15:12:03.201743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:10.162 [2024-12-11 15:12:03.201760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:10.420 [2024-12-11 15:12:03.213745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:10.420 [2024-12-11 15:12:03.213760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:10.420 [2024-12-11 15:12:03.225738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:10.420 [2024-12-11 15:12:03.225752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:10.420 [2024-12-11 15:12:03.237742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:10.420 [2024-12-11 15:12:03.237756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:10.420 [2024-12-11 15:12:03.249739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:10.420 [2024-12-11 15:12:03.249752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:10.420 [2024-12-11 15:12:03.261741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:10.420 [2024-12-11 15:12:03.261765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:10.420 [2024-12-11 15:12:03.273734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:10.420 [2024-12-11 15:12:03.273744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:10.420 [2024-12-11
15:12:03.285738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.420 [2024-12-11 15:12:03.285751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.420 [2024-12-11 15:12:03.297735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.420 [2024-12-11 15:12:03.297747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.420 [2024-12-11 15:12:03.309735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:10.420 [2024-12-11 15:12:03.309746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:10.420 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3329162) - No such process 00:31:10.420 15:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3329162 00:31:10.420 15:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.420 15:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.420 15:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:10.420 15:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.420 15:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:10.421 15:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.421 15:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:10.421 delay0 00:31:10.421 15:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.421 15:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:10.421 15:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.421 15:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:10.421 15:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.421 15:12:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:31:10.421 [2024-12-11 15:12:03.459997] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:18.595 [2024-12-11 15:12:10.588658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba4e50 is same with the state(6) to be set 00:31:18.595 [2024-12-11 15:12:10.588697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba4e50 is same with the state(6) to be set 00:31:18.595 Initializing NVMe Controllers 00:31:18.595 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:18.595 
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:18.595 Initialization complete. Launching workers. 00:31:18.595 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 236, failed: 29413 00:31:18.595 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 29522, failed to submit 127 00:31:18.595 success 29447, unsuccessful 75, failed 0 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:18.595 rmmod nvme_tcp 00:31:18.595 rmmod nvme_fabrics 00:31:18.595 rmmod nvme_keyring 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3327439 ']' 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3327439 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3327439 ']' 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3327439 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3327439 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3327439' 00:31:18.595 killing process with pid 3327439 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3327439 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3327439 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:18.595 
15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.595 15:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.975 15:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:19.975 00:31:19.975 real 0m32.343s 00:31:19.975 user 0m41.796s 00:31:19.975 sys 0m12.996s 00:31:19.975 15:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:19.975 15:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:19.975 ************************************ 00:31:19.975 END TEST nvmf_zcopy 00:31:19.975 ************************************ 00:31:19.975 15:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:19.975 15:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:19.975 15:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:19.975 15:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:20.236 ************************************ 00:31:20.236 START TEST nvmf_nmic 00:31:20.236 ************************************ 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:20.236 * Looking for test storage... 
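The nvmftestfini teardown that closes the zcopy test above boils down to: unload the kernel NVMe/TCP initiator modules, stop the nvmf_tgt process, strip only the SPDK_NVMF-tagged iptables rules, drop the target network namespace and flush the initiator address. A rough standalone sketch of those steps, reconstructed from the commands visible in this log (the ip netns delete line is an assumption, since remove_spdk_ns is not expanded here), would be:

  # Hedged reconstruction of the nvmftestfini cleanup sequence seen above.
  sync
  modprobe -v -r nvme-tcp                                 # unload kernel initiator (also drops nvme_fabrics/nvme_keyring)
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null          # $nvmfpid = PID of the nvmf_tgt started for the test
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # remove only the SPDK_NVMF-commented rules
  ip netns delete cvl_0_0_ns_spdk                         # assumption: what remove_spdk_ns does for this namespace
  ip -4 addr flush cvl_0_1                                # clear the initiator-side address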
00:31:20.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:20.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.236 --rc genhtml_branch_coverage=1 00:31:20.236 --rc genhtml_function_coverage=1 00:31:20.236 --rc genhtml_legend=1 00:31:20.236 --rc geninfo_all_blocks=1 00:31:20.236 --rc geninfo_unexecuted_blocks=1 00:31:20.236 00:31:20.236 ' 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:20.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.236 --rc genhtml_branch_coverage=1 00:31:20.236 --rc genhtml_function_coverage=1 00:31:20.236 --rc genhtml_legend=1 00:31:20.236 --rc geninfo_all_blocks=1 00:31:20.236 --rc geninfo_unexecuted_blocks=1 00:31:20.236 00:31:20.236 ' 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:20.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.236 --rc genhtml_branch_coverage=1 00:31:20.236 --rc genhtml_function_coverage=1 00:31:20.236 --rc genhtml_legend=1 00:31:20.236 --rc geninfo_all_blocks=1 00:31:20.236 --rc geninfo_unexecuted_blocks=1 00:31:20.236 00:31:20.236 ' 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:20.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:20.236 --rc genhtml_branch_coverage=1 00:31:20.236 --rc genhtml_function_coverage=1 00:31:20.236 --rc genhtml_legend=1 00:31:20.236 --rc geninfo_all_blocks=1 00:31:20.236 --rc geninfo_unexecuted_blocks=1 00:31:20.236 00:31:20.236 ' 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:20.236 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:20.237 15:12:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:31:20.237 15:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:26.822 15:12:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:26.822 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.822 15:12:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:26.822 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:26.822 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:26.823 Found net devices under 0000:86:00.0: cvl_0_0 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:26.823 
15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:26.823 Found net devices under 0000:86:00.1: cvl_0_1 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
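The nvmf_tcp_init steps above split the two e810 ports so the target and the initiator can run on one host: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and gets the target address 10.0.0.2, while cvl_0_1 stays in the root namespace with the initiator address 10.0.0.1. A condensed sketch of that setup (interface and namespace names are specific to this testbed) looks roughly like:

  # Hedged sketch of the target/initiator namespace split performed here.
  ip netns add cvl_0_0_ns_spdk                                          # namespace that will own the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic in on the initiator side
  ping -c 1 10.0.0.2                                                    # sanity check, as the log does next

The link-up, iptables and ping steps appear immediately after this point in the log; they are included only to keep the sketch self-contained.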
00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:26.823 15:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:26.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:26.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:31:26.823 00:31:26.823 --- 10.0.0.2 ping statistics --- 00:31:26.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.823 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:26.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:26.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:31:26.823 00:31:26.823 --- 10.0.0.1 ping statistics --- 00:31:26.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:26.823 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3335256 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@510 -- # waitforlisten 3335256 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3335256 ']' 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.823 [2024-12-11 15:12:19.242038] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:26.823 [2024-12-11 15:12:19.242955] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:31:26.823 [2024-12-11 15:12:19.242988] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:26.823 [2024-12-11 15:12:19.322621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:26.823 [2024-12-11 15:12:19.366245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:26.823 [2024-12-11 15:12:19.366277] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:26.823 [2024-12-11 15:12:19.366285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:26.823 [2024-12-11 15:12:19.366291] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:26.823 [2024-12-11 15:12:19.366296] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:26.823 [2024-12-11 15:12:19.367710] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.823 [2024-12-11 15:12:19.367743] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:26.823 [2024-12-11 15:12:19.367851] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.823 [2024-12-11 15:12:19.367852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:26.823 [2024-12-11 15:12:19.436027] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:26.823 [2024-12-11 15:12:19.437109] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:26.823 [2024-12-11 15:12:19.437227] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
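For nmic the target is launched inside that namespace in interrupt mode, pinned to four cores (-m 0xF) with all tracepoint groups enabled (-e 0xFFFF), and waitforlisten then blocks until the RPC socket answers before any rpc_cmd is issued. A minimal reconstruction of that launch, run from the SPDK tree and assuming the default /var/tmp/spdk.sock RPC socket referenced above, could look like:

  # Hedged sketch: start nvmf_tgt in interrupt mode inside the target namespace.
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  # Approximate what waitforlisten does: poll the RPC socket until the app responds.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done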
00:31:26.823 [2024-12-11 15:12:19.437581] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:26.823 [2024-12-11 15:12:19.437624] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:26.823 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.824 [2024-12-11 15:12:19.504671] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.824 Malloc0 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
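The rpc_cmd calls above provision everything test case1 needs: a TCP transport with an 8192-byte IO unit size, a 64 MiB malloc bdev, subsystem cnode1 carrying that bdev as namespace 1, and a TCP listener on 10.0.0.2:4420. rpc_cmd is the harness wrapper around scripts/rpc.py, so a standalone equivalent with the same parameters (default RPC socket assumed) would be roughly:

  # Hedged sketch of the case1 provisioning, mirroring the rpc_cmd parameters in this log.
  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, 8 KiB IO units
  $RPC bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB bdev, 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # becomes NSID 1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Case1 then creates a second subsystem, cnode2, and tries to attach the same Malloc0 to it; that call is expected to fail because the bdev is already claimed exclusive_write by cnode1, which is exactly the 'Invalid parameters' JSON-RPC error recorded below.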
00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.824 [2024-12-11 15:12:19.584904] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:26.824 test case1: single bdev can't be used in multiple subsystems 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.824 [2024-12-11 15:12:19.612376] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:26.824 [2024-12-11 15:12:19.612398] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:26.824 [2024-12-11 15:12:19.612406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:26.824 request: 00:31:26.824 { 00:31:26.824 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:26.824 "namespace": { 00:31:26.824 "bdev_name": "Malloc0", 00:31:26.824 "no_auto_visible": false, 00:31:26.824 "hide_metadata": false 00:31:26.824 }, 00:31:26.824 "method": "nvmf_subsystem_add_ns", 00:31:26.824 "req_id": 1 00:31:26.824 } 00:31:26.824 Got JSON-RPC error response 00:31:26.824 response: 00:31:26.824 { 00:31:26.824 "code": -32602, 00:31:26.824 "message": "Invalid parameters" 00:31:26.824 } 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:26.824 15:12:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:26.824 Adding namespace failed - expected result. 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:26.824 test case2: host connect to nvmf target in multiple paths 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:26.824 [2024-12-11 15:12:19.624475] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:26.824 15:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:27.083 15:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:27.083 15:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:27.083 15:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:27.083 15:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:27.083 15:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:28.986 15:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:28.986 15:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:28.986 15:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:28.986 15:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:29.255 15:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:29.255 15:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:29.255 15:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:29.255 [global] 00:31:29.255 thread=1 00:31:29.255 invalidate=1 
00:31:29.255 rw=write 00:31:29.255 time_based=1 00:31:29.255 runtime=1 00:31:29.255 ioengine=libaio 00:31:29.255 direct=1 00:31:29.255 bs=4096 00:31:29.255 iodepth=1 00:31:29.255 norandommap=0 00:31:29.255 numjobs=1 00:31:29.255 00:31:29.255 verify_dump=1 00:31:29.255 verify_backlog=512 00:31:29.255 verify_state_save=0 00:31:29.255 do_verify=1 00:31:29.255 verify=crc32c-intel 00:31:29.255 [job0] 00:31:29.255 filename=/dev/nvme0n1 00:31:29.255 Could not set queue depth (nvme0n1) 00:31:29.513 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:29.513 fio-3.35 00:31:29.513 Starting 1 thread 00:31:30.884 00:31:30.884 job0: (groupid=0, jobs=1): err= 0: pid=3335872: Wed Dec 11 15:12:23 2024 00:31:30.884 read: IOPS=21, BW=86.0KiB/s (88.1kB/s)(88.0KiB/1023msec) 00:31:30.884 slat (nsec): min=9056, max=23878, avg=22101.14, stdev=2958.98 00:31:30.884 clat (usec): min=40859, max=42046, avg=41286.82, stdev=474.38 00:31:30.884 lat (usec): min=40881, max=42069, avg=41308.92, stdev=474.71 00:31:30.884 clat percentiles (usec): 00:31:30.884 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:30.884 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:30.884 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:31:30.884 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:30.884 | 99.99th=[42206] 00:31:30.884 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:31:30.884 slat (usec): min=9, max=29031, avg=67.16, stdev=1282.55 00:31:30.884 clat (usec): min=129, max=464, avg=153.35, stdev=37.83 00:31:30.884 lat (usec): min=139, max=29355, avg=220.51, stdev=1290.65 00:31:30.884 clat percentiles (usec): 00:31:30.884 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 135], 20.00th=[ 137], 00:31:30.884 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 141], 00:31:30.884 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 241], 95.00th=[ 243], 00:31:30.884 | 99.00th=[ 249], 99.50th=[ 269], 99.90th=[ 465], 99.95th=[ 465], 00:31:30.884 | 99.99th=[ 465] 00:31:30.884 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:30.884 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:30.884 lat (usec) : 250=94.94%, 500=0.94% 00:31:30.884 lat (msec) : 50=4.12% 00:31:30.884 cpu : usr=0.29%, sys=0.49%, ctx=536, majf=0, minf=1 00:31:30.884 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.884 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.884 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.884 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.884 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:30.884 00:31:30.884 Run status group 0 (all jobs): 00:31:30.884 READ: bw=86.0KiB/s (88.1kB/s), 86.0KiB/s-86.0KiB/s (88.1kB/s-88.1kB/s), io=88.0KiB (90.1kB), run=1023-1023msec 00:31:30.884 WRITE: bw=2002KiB/s (2050kB/s), 2002KiB/s-2002KiB/s (2050kB/s-2050kB/s), io=2048KiB (2097kB), run=1023-1023msec 00:31:30.884 00:31:30.884 Disk stats (read/write): 00:31:30.884 nvme0n1: ios=44/512, merge=0/0, ticks=1736/70, in_queue=1806, util=98.70% 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:30.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:30.884 15:12:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:30.884 rmmod nvme_tcp 00:31:30.884 rmmod nvme_fabrics 00:31:30.884 rmmod nvme_keyring 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3335256 ']' 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3335256 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3335256 ']' 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3335256 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3335256 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 3335256' 00:31:30.884 killing process with pid 3335256 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3335256 00:31:30.884 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3335256 00:31:31.143 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:31.143 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:31.143 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:31.143 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:31:31.143 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:31:31.143 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:31.143 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:31:31.143 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:31.144 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:31.144 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.144 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:31.144 15:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.050 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:33.050 00:31:33.050 real 0m13.014s 00:31:33.050 user 0m23.256s 00:31:33.050 sys 0m6.040s 00:31:33.050 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:33.050 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:33.050 ************************************ 00:31:33.050 END TEST nvmf_nmic 00:31:33.050 ************************************ 00:31:33.050 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:33.050 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:33.050 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:33.050 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:33.310 ************************************ 00:31:33.310 START TEST nvmf_fio_target 00:31:33.310 ************************************ 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:33.310 * Looking for test storage... 
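Before moving into nvmf_fio_target: on the host side, the nmic run that just finished reduces to connecting to cnode1 over both listeners (ports 4420 and 4421), running the fio-wrapper write/verify job against the resulting /dev/nvme0n1, and disconnecting. A rough standalone equivalent is sketched below; it assumes nvme-cli and fio are installed, that the target from the earlier sketch is listening, and it reuses the host NQN/ID the harness generated for this run.

#!/usr/bin/env bash
# Host-side sketch of the nmic flow: multipath connect -> fio write+verify -> disconnect.
set -e
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562

# test case2: one subsystem reachable through two TCP listeners
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

# Same job fio-wrapper generated above: 4 KiB sequential writes, queue depth 1, CRC32C verify
cat > nmic-job.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio nmic-job.fio

nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drops both controllers, as logged above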
00:31:33.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:33.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.310 --rc genhtml_branch_coverage=1 00:31:33.310 --rc genhtml_function_coverage=1 00:31:33.310 --rc genhtml_legend=1 00:31:33.310 --rc geninfo_all_blocks=1 00:31:33.310 --rc geninfo_unexecuted_blocks=1 00:31:33.310 00:31:33.310 ' 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:33.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.310 --rc genhtml_branch_coverage=1 00:31:33.310 --rc genhtml_function_coverage=1 00:31:33.310 --rc genhtml_legend=1 00:31:33.310 --rc geninfo_all_blocks=1 00:31:33.310 --rc geninfo_unexecuted_blocks=1 00:31:33.310 00:31:33.310 ' 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:33.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.310 --rc genhtml_branch_coverage=1 00:31:33.310 --rc genhtml_function_coverage=1 00:31:33.310 --rc genhtml_legend=1 00:31:33.310 --rc geninfo_all_blocks=1 00:31:33.310 --rc geninfo_unexecuted_blocks=1 00:31:33.310 00:31:33.310 ' 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:33.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.310 --rc genhtml_branch_coverage=1 00:31:33.310 --rc genhtml_function_coverage=1 00:31:33.310 --rc genhtml_legend=1 00:31:33.310 --rc geninfo_all_blocks=1 00:31:33.310 --rc geninfo_unexecuted_blocks=1 00:31:33.310 
00:31:33.310 ' 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:33.310 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:31:33.311 15:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:39.884 15:12:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:39.884 15:12:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:39.884 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:39.884 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:39.884 Found net 
devices under 0000:86:00.0: cvl_0_0 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:39.884 Found net devices under 0000:86:00.1: cvl_0_1 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:39.884 15:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:39.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:39.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:31:39.885 00:31:39.885 --- 10.0.0.2 ping statistics --- 00:31:39.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.885 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:39.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:39.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:31:39.885 00:31:39.885 --- 10.0.0.1 ping statistics --- 00:31:39.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.885 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3339631 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3339631 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3339631 ']' 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:39.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:39.885 [2024-12-11 15:12:32.334177] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:39.885 [2024-12-11 15:12:32.335092] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:31:39.885 [2024-12-11 15:12:32.335126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:39.885 [2024-12-11 15:12:32.412362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:39.885 [2024-12-11 15:12:32.453806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:39.885 [2024-12-11 15:12:32.453840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:39.885 [2024-12-11 15:12:32.453847] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:39.885 [2024-12-11 15:12:32.453853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:39.885 [2024-12-11 15:12:32.453858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:39.885 [2024-12-11 15:12:32.455276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.885 [2024-12-11 15:12:32.455385] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:39.885 [2024-12-11 15:12:32.455495] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.885 [2024-12-11 15:12:32.455496] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:39.885 [2024-12-11 15:12:32.523644] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:39.885 [2024-12-11 15:12:32.524280] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:39.885 [2024-12-11 15:12:32.524681] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:39.885 [2024-12-11 15:12:32.525022] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:39.885 [2024-12-11 15:12:32.525068] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
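The app start traced above is worth calling out: the harness runs the target inside the cvl_0_0_ns_spdk network namespace, in interrupt mode, on a four-core mask, and then waits for the RPC socket before issuing any rpc.py calls. A condensed launch sketch follows; the paths are this workspace's and are assumptions on any other machine.

# Launch sketch for the interrupt-mode nvmf_tgt used by fio.sh (condensed from the trace above).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
ip netns exec cvl_0_0_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
# -i 0             : instance/shared-memory id (matches --file-prefix=spdk0 in the EAL parameter line)
# -e 0xFFFF        : tracepoint group mask, as reported by app_setup_trace above
# --interrupt-mode : reactors wait on events instead of busy-polling (the "intr mode" notices above)
# -m 0xF           : core mask 0xF -> the four "Reactor started on core 0..3" lines
# The harness' waitforlisten then blocks until /var/tmp/spdk.sock answers before any RPCs are sent.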
00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:39.885 [2024-12-11 15:12:32.760146] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:39.885 15:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:40.144 15:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:40.144 15:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:40.402 15:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:40.402 15:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:40.660 15:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:40.661 15:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:40.661 15:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:40.661 15:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:40.919 15:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:41.286 15:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:41.286 15:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:41.286 15:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:41.286 15:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:31:41.544 15:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:31:41.544 15:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:41.802 15:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:42.059 15:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:42.059 15:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:42.059 15:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:42.059 15:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:42.316 15:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:42.573 [2024-12-11 15:12:35.432093] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.573 15:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:42.830 15:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:42.830 15:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:43.396 15:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:43.396 15:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:43.396 15:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:43.396 15:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:43.396 15:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:43.396 15:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:45.295 15:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:45.295 15:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
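Up to this point fio.sh has built the bdev layout the four fio jobs will exercise: two plain malloc bdevs, a raid0 striped across two more, and a concat across three more, all exported as namespaces of cnode1 so the host sees four devices (/dev/nvme0n1 through nvme0n4). Condensed into one sketch using the same rpc.py; the bdev names and the explicit -b flags are illustrative, since the harness lets rpc.py pick the malloc bdev names.

#!/usr/bin/env bash
# Bdev/namespace layout sketch for the nvmf_fio_target test (same workspace rpc.py as above).
set -e
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py

for b in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $rpc bdev_malloc_create 64 512 -b "$b"            # 64 MB each, 512-byte blocks
done
$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'           # striped, 64 KiB strip
$rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'   # concatenation

$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for ns in Malloc0 Malloc1 raid0 concat0; do           # four namespaces -> /dev/nvme0n1..n4 on the host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$ns"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420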
common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:45.295 15:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:45.295 15:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:45.295 15:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:45.295 15:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:31:45.295 15:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:45.295 [global] 00:31:45.295 thread=1 00:31:45.295 invalidate=1 00:31:45.295 rw=write 00:31:45.295 time_based=1 00:31:45.295 runtime=1 00:31:45.295 ioengine=libaio 00:31:45.295 direct=1 00:31:45.295 bs=4096 00:31:45.295 iodepth=1 00:31:45.295 norandommap=0 00:31:45.295 numjobs=1 00:31:45.295 00:31:45.295 verify_dump=1 00:31:45.295 verify_backlog=512 00:31:45.295 verify_state_save=0 00:31:45.295 do_verify=1 00:31:45.295 verify=crc32c-intel 00:31:45.295 [job0] 00:31:45.295 filename=/dev/nvme0n1 00:31:45.295 [job1] 00:31:45.295 filename=/dev/nvme0n2 00:31:45.295 [job2] 00:31:45.295 filename=/dev/nvme0n3 00:31:45.295 [job3] 00:31:45.295 filename=/dev/nvme0n4 00:31:45.295 Could not set queue depth (nvme0n1) 00:31:45.295 Could not set queue depth (nvme0n2) 00:31:45.295 Could not set queue depth (nvme0n3) 00:31:45.295 Could not set queue depth (nvme0n4) 00:31:45.553 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:45.553 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:45.553 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:45.553 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:45.553 fio-3.35 00:31:45.553 Starting 4 threads 00:31:46.927 00:31:46.927 job0: (groupid=0, jobs=1): err= 0: pid=3340746: Wed Dec 11 15:12:39 2024 00:31:46.927 read: IOPS=22, BW=88.5KiB/s (90.6kB/s)(92.0KiB/1040msec) 00:31:46.927 slat (nsec): min=9228, max=23332, avg=21688.52, stdev=2821.35 00:31:46.927 clat (usec): min=40807, max=41441, avg=40985.96, stdev=122.86 00:31:46.927 lat (usec): min=40829, max=41451, avg=41007.65, stdev=120.74 00:31:46.927 clat percentiles (usec): 00:31:46.927 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:46.927 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:46.927 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:46.927 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:46.927 | 99.99th=[41681] 00:31:46.927 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:31:46.927 slat (nsec): min=9254, max=51411, avg=10415.70, stdev=2319.12 00:31:46.927 clat (usec): min=146, max=351, avg=175.44, stdev=20.23 00:31:46.927 lat (usec): min=156, max=402, avg=185.85, stdev=21.32 00:31:46.927 clat percentiles (usec): 00:31:46.927 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 159], 00:31:46.927 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 174], 00:31:46.927 | 70.00th=[ 182], 80.00th=[ 
190], 90.00th=[ 202], 95.00th=[ 210], 00:31:46.927 | 99.00th=[ 239], 99.50th=[ 255], 99.90th=[ 351], 99.95th=[ 351], 00:31:46.927 | 99.99th=[ 351] 00:31:46.927 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:46.927 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:46.927 lat (usec) : 250=95.14%, 500=0.56% 00:31:46.927 lat (msec) : 50=4.30% 00:31:46.927 cpu : usr=0.29%, sys=0.38%, ctx=536, majf=0, minf=1 00:31:46.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.927 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.927 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.927 job1: (groupid=0, jobs=1): err= 0: pid=3340747: Wed Dec 11 15:12:39 2024 00:31:46.927 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:31:46.927 slat (nsec): min=11811, max=24907, avg=22772.82, stdev=3121.73 00:31:46.927 clat (usec): min=40675, max=41916, avg=41001.14, stdev=224.07 00:31:46.927 lat (usec): min=40687, max=41941, avg=41023.92, stdev=224.97 00:31:46.927 clat percentiles (usec): 00:31:46.927 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:46.927 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:46.927 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:46.927 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:46.927 | 99.99th=[41681] 00:31:46.927 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:31:46.927 slat (usec): min=11, max=688, avg=14.91, stdev=29.98 00:31:46.927 clat (usec): min=149, max=365, avg=178.58, stdev=21.33 00:31:46.927 lat (usec): min=160, max=862, avg=193.49, stdev=37.05 00:31:46.927 clat percentiles (usec): 00:31:46.927 | 1.00th=[ 155], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:31:46.927 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 178], 00:31:46.927 | 70.00th=[ 182], 80.00th=[ 188], 90.00th=[ 200], 95.00th=[ 219], 00:31:46.927 | 99.00th=[ 243], 99.50th=[ 281], 99.90th=[ 367], 99.95th=[ 367], 00:31:46.927 | 99.99th=[ 367] 00:31:46.927 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:46.927 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:46.927 lat (usec) : 250=94.94%, 500=0.94% 00:31:46.927 lat (msec) : 50=4.12% 00:31:46.927 cpu : usr=0.20%, sys=1.20%, ctx=536, majf=0, minf=1 00:31:46.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.927 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.927 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.927 job2: (groupid=0, jobs=1): err= 0: pid=3340748: Wed Dec 11 15:12:39 2024 00:31:46.927 read: IOPS=20, BW=82.5KiB/s (84.5kB/s)(84.0KiB/1018msec) 00:31:46.927 slat (nsec): min=10520, max=29236, avg=23352.67, stdev=3316.71 00:31:46.927 clat (usec): min=40765, max=41878, avg=40997.07, stdev=217.47 00:31:46.927 lat (usec): min=40775, max=41907, avg=41020.42, stdev=219.23 00:31:46.927 clat percentiles (usec): 00:31:46.927 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 
00:31:46.927 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:46.927 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:46.927 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:31:46.927 | 99.99th=[41681] 00:31:46.927 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:31:46.927 slat (usec): min=10, max=40776, avg=136.86, stdev=2064.35 00:31:46.927 clat (usec): min=137, max=316, avg=163.99, stdev=27.49 00:31:46.927 lat (usec): min=149, max=41043, avg=300.86, stdev=2070.26 00:31:46.927 clat percentiles (usec): 00:31:46.927 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 143], 20.00th=[ 145], 00:31:46.927 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 157], 00:31:46.927 | 70.00th=[ 176], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 212], 00:31:46.927 | 99.00th=[ 269], 99.50th=[ 285], 99.90th=[ 318], 99.95th=[ 318], 00:31:46.927 | 99.99th=[ 318] 00:31:46.927 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:46.927 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:46.927 lat (usec) : 250=94.37%, 500=1.69% 00:31:46.927 lat (msec) : 50=3.94% 00:31:46.927 cpu : usr=0.39%, sys=0.98%, ctx=536, majf=0, minf=2 00:31:46.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.927 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.927 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.927 job3: (groupid=0, jobs=1): err= 0: pid=3340749: Wed Dec 11 15:12:39 2024 00:31:46.927 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:31:46.927 slat (nsec): min=10505, max=28424, avg=22348.77, stdev=3025.23 00:31:46.927 clat (usec): min=40821, max=45037, avg=41205.61, stdev=888.12 00:31:46.927 lat (usec): min=40844, max=45065, avg=41227.96, stdev=889.19 00:31:46.927 clat percentiles (usec): 00:31:46.927 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:46.927 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:46.927 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:31:46.927 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:31:46.927 | 99.99th=[44827] 00:31:46.927 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:31:46.927 slat (nsec): min=10295, max=65353, avg=11564.87, stdev=2972.37 00:31:46.927 clat (usec): min=163, max=382, avg=189.57, stdev=16.65 00:31:46.927 lat (usec): min=175, max=393, avg=201.14, stdev=17.46 00:31:46.927 clat percentiles (usec): 00:31:46.927 | 1.00th=[ 172], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:31:46.927 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:31:46.927 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 204], 95.00th=[ 212], 00:31:46.927 | 99.00th=[ 233], 99.50th=[ 297], 99.90th=[ 383], 99.95th=[ 383], 00:31:46.927 | 99.99th=[ 383] 00:31:46.927 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:31:46.927 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:46.927 lat (usec) : 250=94.94%, 500=0.94% 00:31:46.927 lat (msec) : 50=4.12% 00:31:46.927 cpu : usr=0.30%, sys=0.99%, ctx=535, majf=0, minf=1 00:31:46.927 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:31:46.927 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.927 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.927 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.927 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:46.927 00:31:46.927 Run status group 0 (all jobs): 00:31:46.927 READ: bw=338KiB/s (347kB/s), 82.5KiB/s-88.5KiB/s (84.5kB/s-90.6kB/s), io=352KiB (360kB), run=1004-1040msec 00:31:46.927 WRITE: bw=7877KiB/s (8066kB/s), 1969KiB/s-2040KiB/s (2016kB/s-2089kB/s), io=8192KiB (8389kB), run=1004-1040msec 00:31:46.927 00:31:46.927 Disk stats (read/write): 00:31:46.927 nvme0n1: ios=68/512, merge=0/0, ticks=750/90, in_queue=840, util=87.07% 00:31:46.927 nvme0n2: ios=76/512, merge=0/0, ticks=863/86, in_queue=949, util=89.83% 00:31:46.927 nvme0n3: ios=40/512, merge=0/0, ticks=1560/80, in_queue=1640, util=95.31% 00:31:46.927 nvme0n4: ios=75/512, merge=0/0, ticks=810/90, in_queue=900, util=95.48% 00:31:46.927 15:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:46.927 [global] 00:31:46.927 thread=1 00:31:46.927 invalidate=1 00:31:46.927 rw=randwrite 00:31:46.927 time_based=1 00:31:46.927 runtime=1 00:31:46.927 ioengine=libaio 00:31:46.927 direct=1 00:31:46.927 bs=4096 00:31:46.927 iodepth=1 00:31:46.927 norandommap=0 00:31:46.927 numjobs=1 00:31:46.927 00:31:46.927 verify_dump=1 00:31:46.927 verify_backlog=512 00:31:46.927 verify_state_save=0 00:31:46.927 do_verify=1 00:31:46.927 verify=crc32c-intel 00:31:46.927 [job0] 00:31:46.927 filename=/dev/nvme0n1 00:31:46.927 [job1] 00:31:46.927 filename=/dev/nvme0n2 00:31:46.927 [job2] 00:31:46.927 filename=/dev/nvme0n3 00:31:46.927 [job3] 00:31:46.927 filename=/dev/nvme0n4 00:31:46.927 Could not set queue depth (nvme0n1) 00:31:46.927 Could not set queue depth (nvme0n2) 00:31:46.927 Could not set queue depth (nvme0n3) 00:31:46.927 Could not set queue depth (nvme0n4) 00:31:47.185 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:47.185 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:47.185 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:47.185 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:47.185 fio-3.35 00:31:47.185 Starting 4 threads 00:31:48.565 00:31:48.565 job0: (groupid=0, jobs=1): err= 0: pid=3341123: Wed Dec 11 15:12:41 2024 00:31:48.565 read: IOPS=23, BW=92.4KiB/s (94.6kB/s)(96.0KiB/1039msec) 00:31:48.565 slat (nsec): min=9414, max=26299, avg=18179.29, stdev=6157.33 00:31:48.565 clat (usec): min=242, max=41066, avg=39257.32, stdev=8310.83 00:31:48.565 lat (usec): min=264, max=41088, avg=39275.49, stdev=8309.90 00:31:48.565 clat percentiles (usec): 00:31:48.565 | 1.00th=[ 243], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:31:48.565 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:48.565 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:48.565 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:48.565 | 99.99th=[41157] 00:31:48.565 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:31:48.565 slat (nsec): min=10660, 
max=43571, avg=12120.77, stdev=1931.47 00:31:48.565 clat (usec): min=148, max=259, avg=172.07, stdev=13.17 00:31:48.565 lat (usec): min=159, max=290, avg=184.19, stdev=13.64 00:31:48.565 clat percentiles (usec): 00:31:48.565 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:31:48.565 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:31:48.565 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 198], 00:31:48.565 | 99.00th=[ 215], 99.50th=[ 221], 99.90th=[ 260], 99.95th=[ 260], 00:31:48.565 | 99.99th=[ 260] 00:31:48.565 bw ( KiB/s): min= 4096, max= 4096, per=23.09%, avg=4096.00, stdev= 0.00, samples=1 00:31:48.565 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:48.565 lat (usec) : 250=95.52%, 500=0.19% 00:31:48.565 lat (msec) : 50=4.29% 00:31:48.565 cpu : usr=0.19%, sys=1.16%, ctx=537, majf=0, minf=1 00:31:48.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.565 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:48.565 job1: (groupid=0, jobs=1): err= 0: pid=3341124: Wed Dec 11 15:12:41 2024 00:31:48.565 read: IOPS=966, BW=3864KiB/s (3957kB/s)(3868KiB/1001msec) 00:31:48.565 slat (nsec): min=7544, max=33340, avg=8778.45, stdev=2422.56 00:31:48.565 clat (usec): min=207, max=41120, avg=816.30, stdev=4866.64 00:31:48.565 lat (usec): min=215, max=41142, avg=825.08, stdev=4868.34 00:31:48.565 clat percentiles (usec): 00:31:48.565 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 217], 20.00th=[ 219], 00:31:48.565 | 30.00th=[ 221], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:31:48.565 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 241], 95.00th=[ 249], 00:31:48.565 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:48.565 | 99.99th=[41157] 00:31:48.565 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:31:48.565 slat (nsec): min=9934, max=44385, avg=11441.51, stdev=2123.24 00:31:48.565 clat (usec): min=130, max=444, avg=179.64, stdev=30.41 00:31:48.565 lat (usec): min=141, max=456, avg=191.08, stdev=30.52 00:31:48.565 clat percentiles (usec): 00:31:48.565 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:31:48.565 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 172], 60.00th=[ 178], 00:31:48.565 | 70.00th=[ 188], 80.00th=[ 200], 90.00th=[ 225], 95.00th=[ 239], 00:31:48.565 | 99.00th=[ 262], 99.50th=[ 285], 99.90th=[ 404], 99.95th=[ 445], 00:31:48.565 | 99.99th=[ 445] 00:31:48.565 bw ( KiB/s): min= 4096, max= 4096, per=23.09%, avg=4096.00, stdev= 0.00, samples=1 00:31:48.565 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:48.565 lat (usec) : 250=96.69%, 500=2.61% 00:31:48.565 lat (msec) : 50=0.70% 00:31:48.565 cpu : usr=1.90%, sys=3.00%, ctx=1991, majf=0, minf=2 00:31:48.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.565 issued rwts: total=967,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:48.565 job2: (groupid=0, jobs=1): err= 0: pid=3341125: Wed Dec 11 15:12:41 2024 00:31:48.565 read: 
IOPS=160, BW=641KiB/s (657kB/s)(656KiB/1023msec) 00:31:48.565 slat (nsec): min=7779, max=24960, avg=9600.79, stdev=2565.93 00:31:48.565 clat (usec): min=196, max=41046, avg=5446.32, stdev=13650.51 00:31:48.565 lat (usec): min=205, max=41057, avg=5455.92, stdev=13652.12 00:31:48.565 clat percentiles (usec): 00:31:48.565 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 219], 00:31:48.565 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235], 00:31:48.565 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[41157], 95.00th=[41157], 00:31:48.565 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:48.565 | 99.99th=[41157] 00:31:48.565 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:31:48.565 slat (nsec): min=10746, max=38711, avg=12783.59, stdev=2729.84 00:31:48.565 clat (usec): min=143, max=337, avg=232.58, stdev=23.99 00:31:48.565 lat (usec): min=156, max=376, avg=245.36, stdev=24.14 00:31:48.565 clat percentiles (usec): 00:31:48.565 | 1.00th=[ 147], 5.00th=[ 163], 10.00th=[ 223], 20.00th=[ 235], 00:31:48.565 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 239], 60.00th=[ 241], 00:31:48.565 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 247], 00:31:48.565 | 99.00th=[ 262], 99.50th=[ 289], 99.90th=[ 338], 99.95th=[ 338], 00:31:48.565 | 99.99th=[ 338] 00:31:48.565 bw ( KiB/s): min= 4096, max= 4096, per=23.09%, avg=4096.00, stdev= 0.00, samples=1 00:31:48.565 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:48.565 lat (usec) : 250=93.49%, 500=3.40% 00:31:48.565 lat (msec) : 50=3.11% 00:31:48.565 cpu : usr=0.68%, sys=1.08%, ctx=676, majf=0, minf=2 00:31:48.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.565 issued rwts: total=164,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:48.565 job3: (groupid=0, jobs=1): err= 0: pid=3341126: Wed Dec 11 15:12:41 2024 00:31:48.565 read: IOPS=2378, BW=9514KiB/s (9743kB/s)(9524KiB/1001msec) 00:31:48.565 slat (nsec): min=7551, max=24300, avg=8787.96, stdev=1443.66 00:31:48.565 clat (usec): min=179, max=424, avg=216.63, stdev=28.10 00:31:48.565 lat (usec): min=188, max=433, avg=225.42, stdev=28.20 00:31:48.565 clat percentiles (usec): 00:31:48.565 | 1.00th=[ 188], 5.00th=[ 190], 10.00th=[ 192], 20.00th=[ 196], 00:31:48.565 | 30.00th=[ 198], 40.00th=[ 200], 50.00th=[ 208], 60.00th=[ 217], 00:31:48.565 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 265], 95.00th=[ 285], 00:31:48.565 | 99.00th=[ 297], 99.50th=[ 302], 99.90th=[ 314], 99.95th=[ 322], 00:31:48.565 | 99.99th=[ 424] 00:31:48.565 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:48.565 slat (nsec): min=10707, max=39331, avg=11874.29, stdev=1468.19 00:31:48.565 clat (usec): min=133, max=349, avg=163.44, stdev=30.65 00:31:48.565 lat (usec): min=144, max=388, avg=175.32, stdev=30.83 00:31:48.565 clat percentiles (usec): 00:31:48.565 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 143], 00:31:48.565 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 159], 00:31:48.565 | 70.00th=[ 165], 80.00th=[ 178], 90.00th=[ 215], 95.00th=[ 241], 00:31:48.565 | 99.00th=[ 249], 99.50th=[ 258], 99.90th=[ 285], 99.95th=[ 322], 00:31:48.565 | 99.99th=[ 351] 00:31:48.565 bw ( KiB/s): min=11296, max=11296, per=63.67%, 
avg=11296.00, stdev= 0.00, samples=1 00:31:48.565 iops : min= 2824, max= 2824, avg=2824.00, stdev= 0.00, samples=1 00:31:48.565 lat (usec) : 250=92.92%, 500=7.08% 00:31:48.565 cpu : usr=4.60%, sys=7.50%, ctx=4942, majf=0, minf=1 00:31:48.565 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.566 issued rwts: total=2381,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.566 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:48.566 00:31:48.566 Run status group 0 (all jobs): 00:31:48.566 READ: bw=13.3MiB/s (13.9MB/s), 92.4KiB/s-9514KiB/s (94.6kB/s-9743kB/s), io=13.8MiB (14.5MB), run=1001-1039msec 00:31:48.566 WRITE: bw=17.3MiB/s (18.2MB/s), 1971KiB/s-9.99MiB/s (2018kB/s-10.5MB/s), io=18.0MiB (18.9MB), run=1001-1039msec 00:31:48.566 00:31:48.566 Disk stats (read/write): 00:31:48.566 nvme0n1: ios=58/512, merge=0/0, ticks=1733/82, in_queue=1815, util=96.69% 00:31:48.566 nvme0n2: ios=561/827, merge=0/0, ticks=690/143, in_queue=833, util=88.02% 00:31:48.566 nvme0n3: ios=216/512, merge=0/0, ticks=759/111, in_queue=870, util=90.84% 00:31:48.566 nvme0n4: ios=2074/2180, merge=0/0, ticks=1361/328, in_queue=1689, util=98.22% 00:31:48.566 15:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:48.566 [global] 00:31:48.566 thread=1 00:31:48.566 invalidate=1 00:31:48.566 rw=write 00:31:48.566 time_based=1 00:31:48.566 runtime=1 00:31:48.566 ioengine=libaio 00:31:48.566 direct=1 00:31:48.566 bs=4096 00:31:48.566 iodepth=128 00:31:48.566 norandommap=0 00:31:48.566 numjobs=1 00:31:48.566 00:31:48.566 verify_dump=1 00:31:48.566 verify_backlog=512 00:31:48.566 verify_state_save=0 00:31:48.566 do_verify=1 00:31:48.566 verify=crc32c-intel 00:31:48.566 [job0] 00:31:48.566 filename=/dev/nvme0n1 00:31:48.566 [job1] 00:31:48.566 filename=/dev/nvme0n2 00:31:48.566 [job2] 00:31:48.566 filename=/dev/nvme0n3 00:31:48.566 [job3] 00:31:48.566 filename=/dev/nvme0n4 00:31:48.566 Could not set queue depth (nvme0n1) 00:31:48.566 Could not set queue depth (nvme0n2) 00:31:48.566 Could not set queue depth (nvme0n3) 00:31:48.566 Could not set queue depth (nvme0n4) 00:31:48.824 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:48.824 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:48.824 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:48.824 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:48.824 fio-3.35 00:31:48.824 Starting 4 threads 00:31:50.216 00:31:50.216 job0: (groupid=0, jobs=1): err= 0: pid=3341494: Wed Dec 11 15:12:42 2024 00:31:50.216 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:31:50.216 slat (nsec): min=1144, max=15520k, avg=99184.92, stdev=762188.01 00:31:50.216 clat (usec): min=2622, max=48304, avg=12954.76, stdev=5625.14 00:31:50.216 lat (usec): min=2633, max=48313, avg=13053.95, stdev=5705.57 00:31:50.216 clat percentiles (usec): 00:31:50.216 | 1.00th=[ 5276], 5.00th=[ 8160], 10.00th=[ 8586], 20.00th=[ 9634], 00:31:50.216 | 30.00th=[10028], 40.00th=[10421], 50.00th=[11469], 
60.00th=[12256], 00:31:50.216 | 70.00th=[13698], 80.00th=[15795], 90.00th=[17695], 95.00th=[22414], 00:31:50.216 | 99.00th=[39060], 99.50th=[43254], 99.90th=[48497], 99.95th=[48497], 00:31:50.216 | 99.99th=[48497] 00:31:50.216 write: IOPS=3720, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1004msec); 0 zone resets 00:31:50.216 slat (nsec): min=1986, max=14410k, avg=158242.54, stdev=948438.54 00:31:50.216 clat (usec): min=698, max=105508, avg=21640.51, stdev=23769.28 00:31:50.216 lat (usec): min=1952, max=105517, avg=21798.75, stdev=23941.57 00:31:50.216 clat percentiles (msec): 00:31:50.216 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 7], 20.00th=[ 9], 00:31:50.216 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 14], 00:31:50.216 | 70.00th=[ 17], 80.00th=[ 29], 90.00th=[ 51], 95.00th=[ 93], 00:31:50.216 | 99.00th=[ 103], 99.50th=[ 104], 99.90th=[ 106], 99.95th=[ 106], 00:31:50.216 | 99.99th=[ 106] 00:31:50.216 bw ( KiB/s): min=12072, max=16792, per=22.78%, avg=14432.00, stdev=3337.54, samples=2 00:31:50.216 iops : min= 3018, max= 4198, avg=3608.00, stdev=834.39, samples=2 00:31:50.216 lat (usec) : 750=0.03% 00:31:50.216 lat (msec) : 4=0.29%, 10=36.69%, 20=45.88%, 50=12.01%, 100=3.77% 00:31:50.216 lat (msec) : 250=1.34% 00:31:50.216 cpu : usr=2.69%, sys=4.09%, ctx=267, majf=0, minf=1 00:31:50.216 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:31:50.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:50.217 issued rwts: total=3584,3735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:50.217 job1: (groupid=0, jobs=1): err= 0: pid=3341495: Wed Dec 11 15:12:42 2024 00:31:50.217 read: IOPS=3858, BW=15.1MiB/s (15.8MB/s)(15.7MiB/1044msec) 00:31:50.217 slat (nsec): min=1221, max=13213k, avg=103693.73, stdev=764941.61 00:31:50.217 clat (usec): min=974, max=61411, avg=15061.06, stdev=10832.22 00:31:50.217 lat (usec): min=982, max=67172, avg=15164.76, stdev=10885.45 00:31:50.217 clat percentiles (usec): 00:31:50.217 | 1.00th=[ 1106], 5.00th=[ 5997], 10.00th=[ 8094], 20.00th=[ 9896], 00:31:50.217 | 30.00th=[10552], 40.00th=[11469], 50.00th=[11731], 60.00th=[12256], 00:31:50.217 | 70.00th=[13304], 80.00th=[15401], 90.00th=[29230], 95.00th=[47449], 00:31:50.217 | 99.00th=[51643], 99.50th=[53216], 99.90th=[58459], 99.95th=[61604], 00:31:50.217 | 99.99th=[61604] 00:31:50.217 write: IOPS=3923, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1044msec); 0 zone resets 00:31:50.217 slat (nsec): min=1958, max=19839k, avg=128763.95, stdev=930127.50 00:31:50.217 clat (usec): min=394, max=77169, avg=17474.82, stdev=13881.38 00:31:50.217 lat (usec): min=404, max=77177, avg=17603.59, stdev=13997.98 00:31:50.217 clat percentiles (usec): 00:31:50.217 | 1.00th=[ 2442], 5.00th=[ 6456], 10.00th=[ 7373], 20.00th=[ 9372], 00:31:50.217 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11600], 60.00th=[13042], 00:31:50.217 | 70.00th=[17171], 80.00th=[24773], 90.00th=[34866], 95.00th=[49021], 00:31:50.217 | 99.00th=[67634], 99.50th=[67634], 99.90th=[77071], 99.95th=[77071], 00:31:50.217 | 99.99th=[77071] 00:31:50.217 bw ( KiB/s): min=11344, max=21424, per=25.86%, avg=16384.00, stdev=7127.64, samples=2 00:31:50.217 iops : min= 2836, max= 5356, avg=4096.00, stdev=1781.91, samples=2 00:31:50.217 lat (usec) : 500=0.02%, 1000=0.17% 00:31:50.217 lat (msec) : 2=0.85%, 4=1.37%, 10=20.43%, 20=58.35%, 50=14.71% 00:31:50.217 lat (msec) : 100=4.10% 00:31:50.217 cpu 
: usr=2.68%, sys=4.41%, ctx=312, majf=0, minf=1 00:31:50.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:50.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:50.217 issued rwts: total=4028,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:50.217 job2: (groupid=0, jobs=1): err= 0: pid=3341496: Wed Dec 11 15:12:42 2024 00:31:50.217 read: IOPS=4257, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1003msec) 00:31:50.217 slat (nsec): min=1118, max=15455k, avg=100982.73, stdev=790903.01 00:31:50.217 clat (usec): min=1802, max=39749, avg=12522.63, stdev=5136.91 00:31:50.217 lat (usec): min=5213, max=39753, avg=12623.61, stdev=5205.20 00:31:50.217 clat percentiles (usec): 00:31:50.217 | 1.00th=[ 5342], 5.00th=[ 6456], 10.00th=[ 7373], 20.00th=[ 8979], 00:31:50.217 | 30.00th=[ 9634], 40.00th=[10552], 50.00th=[11207], 60.00th=[12387], 00:31:50.217 | 70.00th=[13435], 80.00th=[16057], 90.00th=[19792], 95.00th=[21890], 00:31:50.217 | 99.00th=[32637], 99.50th=[35390], 99.90th=[39584], 99.95th=[39584], 00:31:50.217 | 99.99th=[39584] 00:31:50.217 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:31:50.217 slat (usec): min=2, max=17390, avg=107.02, stdev=792.01 00:31:50.217 clat (usec): min=1254, max=44442, avg=15483.64, stdev=8987.52 00:31:50.217 lat (usec): min=1374, max=44473, avg=15590.66, stdev=9052.78 00:31:50.217 clat percentiles (usec): 00:31:50.217 | 1.00th=[ 2966], 5.00th=[ 5997], 10.00th=[ 6783], 20.00th=[ 7832], 00:31:50.217 | 30.00th=[ 8979], 40.00th=[ 9896], 50.00th=[11469], 60.00th=[15926], 00:31:50.217 | 70.00th=[19006], 80.00th=[25297], 90.00th=[30278], 95.00th=[31851], 00:31:50.217 | 99.00th=[38011], 99.50th=[39584], 99.90th=[39584], 99.95th=[40633], 00:31:50.217 | 99.99th=[44303] 00:31:50.217 bw ( KiB/s): min=16784, max=20080, per=29.09%, avg=18432.00, stdev=2330.62, samples=2 00:31:50.217 iops : min= 4196, max= 5020, avg=4608.00, stdev=582.66, samples=2 00:31:50.217 lat (msec) : 2=0.30%, 4=0.65%, 10=36.89%, 20=42.86%, 50=19.29% 00:31:50.217 cpu : usr=2.40%, sys=5.09%, ctx=370, majf=0, minf=1 00:31:50.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:31:50.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:50.217 issued rwts: total=4270,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:50.217 job3: (groupid=0, jobs=1): err= 0: pid=3341497: Wed Dec 11 15:12:42 2024 00:31:50.217 read: IOPS=3638, BW=14.2MiB/s (14.9MB/s)(14.8MiB/1042msec) 00:31:50.217 slat (nsec): min=1090, max=15173k, avg=129044.21, stdev=858119.76 00:31:50.217 clat (usec): min=7419, max=56072, avg=18990.61, stdev=10873.01 00:31:50.217 lat (usec): min=7427, max=58165, avg=19119.65, stdev=10942.84 00:31:50.217 clat percentiles (usec): 00:31:50.217 | 1.00th=[ 8094], 5.00th=[ 9634], 10.00th=[11076], 20.00th=[11469], 00:31:50.217 | 30.00th=[11994], 40.00th=[12387], 50.00th=[13435], 60.00th=[15270], 00:31:50.217 | 70.00th=[19792], 80.00th=[28705], 90.00th=[36963], 95.00th=[44303], 00:31:50.217 | 99.00th=[52167], 99.50th=[52167], 99.90th=[52691], 99.95th=[52691], 00:31:50.217 | 99.99th=[55837] 00:31:50.217 write: IOPS=3930, BW=15.4MiB/s (16.1MB/s)(16.0MiB/1042msec); 0 zone resets 
00:31:50.217 slat (nsec): min=1861, max=18802k, avg=118082.03, stdev=815596.79 00:31:50.217 clat (usec): min=5771, max=58877, avg=14693.54, stdev=7597.83 00:31:50.217 lat (usec): min=5785, max=58910, avg=14811.62, stdev=7676.19 00:31:50.217 clat percentiles (usec): 00:31:50.217 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10945], 00:31:50.217 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11994], 60.00th=[12125], 00:31:50.217 | 70.00th=[13042], 80.00th=[15401], 90.00th=[26084], 95.00th=[32113], 00:31:50.217 | 99.00th=[47449], 99.50th=[47973], 99.90th=[47973], 99.95th=[51119], 00:31:50.217 | 99.99th=[58983] 00:31:50.217 bw ( KiB/s): min=16224, max=16544, per=25.86%, avg=16384.00, stdev=226.27, samples=2 00:31:50.217 iops : min= 4056, max= 4136, avg=4096.00, stdev=56.57, samples=2 00:31:50.217 lat (msec) : 10=8.51%, 20=70.64%, 50=20.11%, 100=0.75% 00:31:50.217 cpu : usr=2.88%, sys=5.48%, ctx=340, majf=0, minf=1 00:31:50.217 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:50.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:50.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:50.217 issued rwts: total=3791,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:50.217 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:50.217 00:31:50.217 Run status group 0 (all jobs): 00:31:50.217 READ: bw=58.6MiB/s (61.5MB/s), 13.9MiB/s-16.6MiB/s (14.6MB/s-17.4MB/s), io=61.2MiB (64.2MB), run=1003-1044msec 00:31:50.217 WRITE: bw=61.9MiB/s (64.9MB/s), 14.5MiB/s-17.9MiB/s (15.2MB/s-18.8MB/s), io=64.6MiB (67.7MB), run=1003-1044msec 00:31:50.217 00:31:50.217 Disk stats (read/write): 00:31:50.217 nvme0n1: ios=2643/3072, merge=0/0, ticks=34494/68353, in_queue=102847, util=96.29% 00:31:50.217 nvme0n2: ios=3619/3659, merge=0/0, ticks=30812/40475, in_queue=71287, util=96.95% 00:31:50.217 nvme0n3: ios=3626/4011, merge=0/0, ticks=35872/46224, in_queue=82096, util=97.19% 00:31:50.217 nvme0n4: ios=3072/3452, merge=0/0, ticks=20040/20045, in_queue=40085, util=89.72% 00:31:50.217 15:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:50.217 [global] 00:31:50.217 thread=1 00:31:50.217 invalidate=1 00:31:50.217 rw=randwrite 00:31:50.217 time_based=1 00:31:50.217 runtime=1 00:31:50.217 ioengine=libaio 00:31:50.217 direct=1 00:31:50.217 bs=4096 00:31:50.217 iodepth=128 00:31:50.217 norandommap=0 00:31:50.217 numjobs=1 00:31:50.217 00:31:50.217 verify_dump=1 00:31:50.217 verify_backlog=512 00:31:50.217 verify_state_save=0 00:31:50.217 do_verify=1 00:31:50.217 verify=crc32c-intel 00:31:50.217 [job0] 00:31:50.217 filename=/dev/nvme0n1 00:31:50.217 [job1] 00:31:50.217 filename=/dev/nvme0n2 00:31:50.217 [job2] 00:31:50.217 filename=/dev/nvme0n3 00:31:50.217 [job3] 00:31:50.217 filename=/dev/nvme0n4 00:31:50.217 Could not set queue depth (nvme0n1) 00:31:50.217 Could not set queue depth (nvme0n2) 00:31:50.217 Could not set queue depth (nvme0n3) 00:31:50.217 Could not set queue depth (nvme0n4) 00:31:50.477 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:50.477 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:50.477 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:50.477 
job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:50.477 fio-3.35 00:31:50.477 Starting 4 threads 00:31:51.846 00:31:51.846 job0: (groupid=0, jobs=1): err= 0: pid=3341868: Wed Dec 11 15:12:44 2024 00:31:51.846 read: IOPS=2762, BW=10.8MiB/s (11.3MB/s)(10.9MiB/1007msec) 00:31:51.846 slat (nsec): min=1451, max=20951k, avg=136444.27, stdev=928762.97 00:31:51.846 clat (usec): min=3499, max=51494, avg=16479.96, stdev=8133.11 00:31:51.846 lat (usec): min=3508, max=51501, avg=16616.41, stdev=8192.77 00:31:51.846 clat percentiles (usec): 00:31:51.846 | 1.00th=[ 4293], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[10814], 00:31:51.846 | 30.00th=[11207], 40.00th=[11731], 50.00th=[13566], 60.00th=[16188], 00:31:51.846 | 70.00th=[20055], 80.00th=[23200], 90.00th=[26084], 95.00th=[29230], 00:31:51.846 | 99.00th=[46924], 99.50th=[48497], 99.90th=[51643], 99.95th=[51643], 00:31:51.846 | 99.99th=[51643] 00:31:51.846 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:31:51.846 slat (usec): min=2, max=27933, avg=197.57, stdev=1142.98 00:31:51.846 clat (usec): min=2694, max=86882, avg=26607.01, stdev=15132.22 00:31:51.846 lat (usec): min=2706, max=86889, avg=26804.57, stdev=15206.24 00:31:51.846 clat percentiles (usec): 00:31:51.846 | 1.00th=[ 4146], 5.00th=[13042], 10.00th=[17171], 20.00th=[17695], 00:31:51.846 | 30.00th=[20055], 40.00th=[21103], 50.00th=[21365], 60.00th=[21890], 00:31:51.846 | 70.00th=[23200], 80.00th=[30540], 90.00th=[51119], 95.00th=[58459], 00:31:51.846 | 99.00th=[84411], 99.50th=[85459], 99.90th=[86508], 99.95th=[86508], 00:31:51.846 | 99.99th=[86508] 00:31:51.846 bw ( KiB/s): min=11006, max=13592, per=17.13%, avg=12299.00, stdev=1828.58, samples=2 00:31:51.846 iops : min= 2751, max= 3398, avg=3074.50, stdev=457.50, samples=2 00:31:51.846 lat (msec) : 4=0.67%, 10=10.03%, 20=38.06%, 50=45.73%, 100=5.52% 00:31:51.846 cpu : usr=1.49%, sys=3.48%, ctx=382, majf=0, minf=1 00:31:51.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:31:51.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:51.846 issued rwts: total=2782,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.846 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:51.846 job1: (groupid=0, jobs=1): err= 0: pid=3341872: Wed Dec 11 15:12:44 2024 00:31:51.846 read: IOPS=5526, BW=21.6MiB/s (22.6MB/s)(22.0MiB/1019msec) 00:31:51.846 slat (nsec): min=1411, max=14035k, avg=84521.28, stdev=695284.10 00:31:51.846 clat (usec): min=4725, max=26435, avg=11037.47, stdev=2935.37 00:31:51.846 lat (usec): min=4734, max=26731, avg=11122.00, stdev=2993.73 00:31:51.846 clat percentiles (usec): 00:31:51.846 | 1.00th=[ 5538], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[ 9110], 00:31:51.846 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10814], 00:31:51.846 | 70.00th=[11600], 80.00th=[13042], 90.00th=[15139], 95.00th=[17171], 00:31:51.846 | 99.00th=[20841], 99.50th=[23200], 99.90th=[26346], 99.95th=[26346], 00:31:51.846 | 99.99th=[26346] 00:31:51.846 write: IOPS=5971, BW=23.3MiB/s (24.5MB/s)(23.8MiB/1019msec); 0 zone resets 00:31:51.846 slat (usec): min=2, max=21825, avg=82.62, stdev=665.12 00:31:51.846 clat (usec): min=218, max=39073, avg=11000.95, stdev=4339.38 00:31:51.846 lat (usec): min=2165, max=40053, avg=11083.56, stdev=4376.46 00:31:51.846 clat percentiles (usec): 00:31:51.846 | 1.00th=[ 4817], 
5.00th=[ 6128], 10.00th=[ 6783], 20.00th=[ 8455], 00:31:51.846 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10421], 00:31:51.846 | 70.00th=[11076], 80.00th=[12649], 90.00th=[16909], 95.00th=[19006], 00:31:51.846 | 99.00th=[30802], 99.50th=[35390], 99.90th=[39060], 99.95th=[39060], 00:31:51.846 | 99.99th=[39060] 00:31:51.846 bw ( KiB/s): min=22832, max=24873, per=33.22%, avg=23852.50, stdev=1443.20, samples=2 00:31:51.846 iops : min= 5708, max= 6218, avg=5963.00, stdev=360.62, samples=2 00:31:51.846 lat (usec) : 250=0.01% 00:31:51.846 lat (msec) : 4=0.22%, 10=44.76%, 20=52.68%, 50=2.32% 00:31:51.846 cpu : usr=5.30%, sys=6.78%, ctx=299, majf=0, minf=1 00:31:51.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:31:51.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:51.846 issued rwts: total=5632,6085,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.846 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:51.846 job2: (groupid=0, jobs=1): err= 0: pid=3341879: Wed Dec 11 15:12:44 2024 00:31:51.846 read: IOPS=3074, BW=12.0MiB/s (12.6MB/s)(12.2MiB/1014msec) 00:31:51.846 slat (nsec): min=1414, max=16035k, avg=166556.39, stdev=1077221.24 00:31:51.846 clat (usec): min=3697, max=65426, avg=18004.35, stdev=10707.81 00:31:51.846 lat (usec): min=3708, max=65429, avg=18170.91, stdev=10804.00 00:31:51.846 clat percentiles (usec): 00:31:51.846 | 1.00th=[ 4686], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[11994], 00:31:51.846 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13173], 60.00th=[15401], 00:31:51.846 | 70.00th=[20055], 80.00th=[24773], 90.00th=[31065], 95.00th=[39584], 00:31:51.846 | 99.00th=[62653], 99.50th=[64226], 99.90th=[65274], 99.95th=[65274], 00:31:51.846 | 99.99th=[65274] 00:31:51.846 write: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1014msec); 0 zone resets 00:31:51.846 slat (usec): min=2, max=16220, avg=129.88, stdev=733.59 00:31:51.846 clat (usec): min=1520, max=65426, avg=20180.96, stdev=9314.98 00:31:51.846 lat (usec): min=1535, max=65430, avg=20310.84, stdev=9355.55 00:31:51.846 clat percentiles (usec): 00:31:51.846 | 1.00th=[ 3916], 5.00th=[ 9241], 10.00th=[10421], 20.00th=[13173], 00:31:51.846 | 30.00th=[16909], 40.00th=[17957], 50.00th=[20317], 60.00th=[21103], 00:31:51.846 | 70.00th=[21627], 80.00th=[22414], 90.00th=[27657], 95.00th=[43254], 00:31:51.846 | 99.00th=[55837], 99.50th=[55837], 99.90th=[64750], 99.95th=[65274], 00:31:51.846 | 99.99th=[65274] 00:31:51.846 bw ( KiB/s): min=13696, max=14320, per=19.51%, avg=14008.00, stdev=441.23, samples=2 00:31:51.846 iops : min= 3424, max= 3580, avg=3502.00, stdev=110.31, samples=2 00:31:51.846 lat (msec) : 2=0.03%, 4=0.85%, 10=9.88%, 20=47.27%, 50=38.79% 00:31:51.846 lat (msec) : 100=3.18% 00:31:51.846 cpu : usr=2.57%, sys=3.75%, ctx=392, majf=0, minf=1 00:31:51.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:31:51.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:51.846 issued rwts: total=3118,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.846 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:51.846 job3: (groupid=0, jobs=1): err= 0: pid=3341882: Wed Dec 11 15:12:44 2024 00:31:51.846 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:31:51.846 slat (nsec): min=1215, max=19701k, avg=99378.05, 
stdev=834664.91 00:31:51.846 clat (usec): min=2198, max=50770, avg=12926.23, stdev=6367.41 00:31:51.846 lat (usec): min=2218, max=50795, avg=13025.61, stdev=6439.70 00:31:51.846 clat percentiles (usec): 00:31:51.846 | 1.00th=[ 3654], 5.00th=[ 6259], 10.00th=[ 7635], 20.00th=[ 9765], 00:31:51.846 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[11731], 00:31:51.846 | 70.00th=[12387], 80.00th=[14877], 90.00th=[20317], 95.00th=[28705], 00:31:51.846 | 99.00th=[35390], 99.50th=[43779], 99.90th=[44303], 99.95th=[44827], 00:31:51.847 | 99.99th=[50594] 00:31:51.847 write: IOPS=5506, BW=21.5MiB/s (22.6MB/s)(21.7MiB/1008msec); 0 zone resets 00:31:51.847 slat (nsec): min=1890, max=9725.6k, avg=76133.98, stdev=425101.44 00:31:51.847 clat (usec): min=732, max=28421, avg=11058.86, stdev=3576.75 00:31:51.847 lat (usec): min=904, max=28428, avg=11134.99, stdev=3606.31 00:31:51.847 clat percentiles (usec): 00:31:51.847 | 1.00th=[ 2769], 5.00th=[ 6063], 10.00th=[ 6915], 20.00th=[ 8979], 00:31:51.847 | 30.00th=[ 9765], 40.00th=[10552], 50.00th=[11076], 60.00th=[11338], 00:31:51.847 | 70.00th=[11600], 80.00th=[11994], 90.00th=[15008], 95.00th=[19006], 00:31:51.847 | 99.00th=[24249], 99.50th=[25297], 99.90th=[27395], 99.95th=[27395], 00:31:51.847 | 99.99th=[28443] 00:31:51.847 bw ( KiB/s): min=18816, max=24576, per=30.22%, avg=21696.00, stdev=4072.94, samples=2 00:31:51.847 iops : min= 4704, max= 6144, avg=5424.00, stdev=1018.23, samples=2 00:31:51.847 lat (usec) : 750=0.01%, 1000=0.19% 00:31:51.847 lat (msec) : 2=0.16%, 4=1.38%, 10=27.49%, 20=64.45%, 50=6.33% 00:31:51.847 lat (msec) : 100=0.01% 00:31:51.847 cpu : usr=2.98%, sys=4.97%, ctx=557, majf=0, minf=1 00:31:51.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:51.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:51.847 issued rwts: total=5120,5551,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:51.847 00:31:51.847 Run status group 0 (all jobs): 00:31:51.847 READ: bw=63.8MiB/s (66.9MB/s), 10.8MiB/s-21.6MiB/s (11.3MB/s-22.6MB/s), io=65.0MiB (68.2MB), run=1007-1019msec 00:31:51.847 WRITE: bw=70.1MiB/s (73.5MB/s), 11.9MiB/s-23.3MiB/s (12.5MB/s-24.5MB/s), io=71.5MiB (74.9MB), run=1007-1019msec 00:31:51.847 00:31:51.847 Disk stats (read/write): 00:31:51.847 nvme0n1: ios=2067/2559, merge=0/0, ticks=37447/69015, in_queue=106462, util=89.88% 00:31:51.847 nvme0n2: ios=4963/5120, merge=0/0, ticks=53658/51058, in_queue=104716, util=93.91% 00:31:51.847 nvme0n3: ios=2624/2975, merge=0/0, ticks=47342/58206, in_queue=105548, util=99.06% 00:31:51.847 nvme0n4: ios=4756/5120, merge=0/0, ticks=43657/41471, in_queue=85128, util=98.11% 00:31:51.847 15:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:51.847 15:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3342102 00:31:51.847 15:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:51.847 15:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:51.847 [global] 00:31:51.847 thread=1 00:31:51.847 invalidate=1 00:31:51.847 rw=read 00:31:51.847 time_based=1 00:31:51.847 runtime=10 00:31:51.847 ioengine=libaio 
00:31:51.847 direct=1 00:31:51.847 bs=4096 00:31:51.847 iodepth=1 00:31:51.847 norandommap=1 00:31:51.847 numjobs=1 00:31:51.847 00:31:51.847 [job0] 00:31:51.847 filename=/dev/nvme0n1 00:31:51.847 [job1] 00:31:51.847 filename=/dev/nvme0n2 00:31:51.847 [job2] 00:31:51.847 filename=/dev/nvme0n3 00:31:51.847 [job3] 00:31:51.847 filename=/dev/nvme0n4 00:31:51.847 Could not set queue depth (nvme0n1) 00:31:51.847 Could not set queue depth (nvme0n2) 00:31:51.847 Could not set queue depth (nvme0n3) 00:31:51.847 Could not set queue depth (nvme0n4) 00:31:51.847 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:51.847 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:51.847 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:51.847 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:51.847 fio-3.35 00:31:51.847 Starting 4 threads 00:31:55.122 15:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:55.122 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=4575232, buflen=4096 00:31:55.122 fio: pid=3342289, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:55.122 15:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:55.122 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=327680, buflen=4096 00:31:55.122 fio: pid=3342283, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:55.122 15:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:55.122 15:12:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:55.122 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=53452800, buflen=4096 00:31:55.122 fio: pid=3342256, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:55.379 15:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:55.379 15:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:55.379 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=59449344, buflen=4096 00:31:55.379 fio: pid=3342266, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:55.379 15:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:55.379 15:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:55.379 00:31:55.379 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=3342256: Wed Dec 11 15:12:48 2024 00:31:55.379 read: IOPS=4137, BW=16.2MiB/s (16.9MB/s)(51.0MiB/3154msec) 00:31:55.379 slat (usec): min=3, max=26463, avg=10.45, stdev=264.74 00:31:55.379 clat (usec): min=184, max=20892, avg=228.45, stdev=191.98 00:31:55.379 lat (usec): min=191, max=26797, avg=238.90, stdev=328.15 00:31:55.379 clat percentiles (usec): 00:31:55.379 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 210], 00:31:55.379 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 217], 60.00th=[ 221], 00:31:55.379 | 70.00th=[ 229], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 265], 00:31:55.379 | 99.00th=[ 310], 99.50th=[ 355], 99.90th=[ 412], 99.95th=[ 619], 00:31:55.379 | 99.99th=[ 4490] 00:31:55.379 bw ( KiB/s): min=15064, max=18096, per=49.04%, avg=16760.83, stdev=1400.60, samples=6 00:31:55.379 iops : min= 3766, max= 4524, avg=4190.17, stdev=350.21, samples=6 00:31:55.379 lat (usec) : 250=84.58%, 500=15.36%, 750=0.02% 00:31:55.379 lat (msec) : 2=0.01%, 4=0.01%, 10=0.02%, 50=0.01% 00:31:55.379 cpu : usr=1.05%, sys=3.71%, ctx=13054, majf=0, minf=2 00:31:55.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:55.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.379 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.379 issued rwts: total=13051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:55.379 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3342266: Wed Dec 11 15:12:48 2024 00:31:55.379 read: IOPS=4312, BW=16.8MiB/s (17.7MB/s)(56.7MiB/3366msec) 00:31:55.379 slat (usec): min=3, max=30122, avg=13.59, stdev=348.21 00:31:55.379 clat (usec): min=177, max=508, avg=215.67, stdev=16.79 00:31:55.379 lat (usec): min=185, max=30415, avg=229.27, stdev=350.07 00:31:55.379 clat percentiles (usec): 00:31:55.379 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 208], 00:31:55.379 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 217], 00:31:55.379 | 70.00th=[ 219], 80.00th=[ 223], 90.00th=[ 229], 95.00th=[ 243], 00:31:55.379 | 99.00th=[ 289], 99.50th=[ 310], 99.90th=[ 351], 99.95th=[ 367], 00:31:55.379 | 99.99th=[ 510] 00:31:55.379 bw ( KiB/s): min=17296, max=18264, per=51.76%, avg=17690.67, stdev=399.83, samples=6 00:31:55.379 iops : min= 4324, max= 4566, avg=4422.67, stdev=99.96, samples=6 00:31:55.379 lat (usec) : 250=96.57%, 500=3.41%, 750=0.01% 00:31:55.379 cpu : usr=1.34%, sys=3.54%, ctx=14522, majf=0, minf=2 00:31:55.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:55.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.379 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.379 issued rwts: total=14515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:55.379 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3342283: Wed Dec 11 15:12:48 2024 00:31:55.379 read: IOPS=27, BW=109KiB/s (112kB/s)(320KiB/2937msec) 00:31:55.379 slat (nsec): min=10662, max=43432, avg=24317.65, stdev=4916.12 00:31:55.379 clat (usec): min=293, max=41970, avg=36412.88, stdev=12915.84 00:31:55.379 lat (usec): min=320, max=41995, avg=36437.19, stdev=12915.57 00:31:55.379 clat percentiles (usec): 00:31:55.379 | 1.00th=[ 293], 5.00th=[ 351], 10.00th=[ 412], 20.00th=[40633], 
00:31:55.379 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:55.379 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:55.379 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:55.379 | 99.99th=[42206] 00:31:55.379 bw ( KiB/s): min= 96, max= 136, per=0.32%, avg=108.80, stdev=16.59, samples=5 00:31:55.379 iops : min= 24, max= 34, avg=27.20, stdev= 4.15, samples=5 00:31:55.379 lat (usec) : 500=11.11% 00:31:55.379 lat (msec) : 50=87.65% 00:31:55.379 cpu : usr=0.14%, sys=0.00%, ctx=81, majf=0, minf=1 00:31:55.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:55.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.379 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.379 issued rwts: total=81,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:55.379 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3342289: Wed Dec 11 15:12:48 2024 00:31:55.379 read: IOPS=411, BW=1643KiB/s (1682kB/s)(4468KiB/2720msec) 00:31:55.379 slat (nsec): min=5596, max=38177, avg=9180.70, stdev=2330.77 00:31:55.379 clat (usec): min=194, max=42002, avg=2404.75, stdev=9040.25 00:31:55.379 lat (usec): min=203, max=42013, avg=2413.93, stdev=9040.95 00:31:55.379 clat percentiles (usec): 00:31:55.379 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 233], 00:31:55.379 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 273], 00:31:55.379 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 359], 95.00th=[40633], 00:31:55.379 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:31:55.379 | 99.99th=[42206] 00:31:55.379 bw ( KiB/s): min= 96, max= 3976, per=5.16%, avg=1763.20, stdev=1663.01, samples=5 00:31:55.379 iops : min= 24, max= 994, avg=440.80, stdev=415.75, samples=5 00:31:55.379 lat (usec) : 250=45.17%, 500=49.02%, 750=0.27% 00:31:55.379 lat (msec) : 4=0.09%, 20=0.18%, 50=5.19% 00:31:55.379 cpu : usr=0.00%, sys=0.55%, ctx=1120, majf=0, minf=2 00:31:55.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:55.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.379 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:55.379 issued rwts: total=1118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:55.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:55.379 00:31:55.379 Run status group 0 (all jobs): 00:31:55.379 READ: bw=33.4MiB/s (35.0MB/s), 109KiB/s-16.8MiB/s (112kB/s-17.7MB/s), io=112MiB (118MB), run=2720-3366msec 00:31:55.379 00:31:55.379 Disk stats (read/write): 00:31:55.379 nvme0n1: ios=12950/0, merge=0/0, ticks=2862/0, in_queue=2862, util=94.45% 00:31:55.379 nvme0n2: ios=14515/0, merge=0/0, ticks=3069/0, in_queue=3069, util=93.71% 00:31:55.379 nvme0n3: ios=78/0, merge=0/0, ticks=2834/0, in_queue=2834, util=96.52% 00:31:55.379 nvme0n4: ios=1141/0, merge=0/0, ticks=2802/0, in_queue=2802, util=100.00% 00:31:55.635 15:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:55.635 15:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:55.892 15:12:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:55.892 15:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:56.148 15:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:56.149 15:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:56.149 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:56.149 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:56.406 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:56.406 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3342102 00:31:56.406 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:56.406 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:56.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:56.663 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:56.663 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:56.663 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:56.663 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:56.663 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:56.663 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:56.663 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:56.663 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:56.663 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:56.663 nvmf hotplug test: fio failed as expected 00:31:56.663 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:56.921 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:56.921 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:56.922 15:12:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:56.922 rmmod nvme_tcp 00:31:56.922 rmmod nvme_fabrics 00:31:56.922 rmmod nvme_keyring 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3339631 ']' 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3339631 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3339631 ']' 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3339631 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3339631 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3339631' 00:31:56.922 killing process with pid 3339631 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3339631 00:31:56.922 15:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3339631 00:31:57.181 15:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:57.181 15:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:57.181 15:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:57.181 15:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:31:57.181 15:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:31:57.181 15:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:57.181 15:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:31:57.181 15:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:57.181 15:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:57.181 15:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:57.181 15:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:57.181 15:12:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.086 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:59.346 00:31:59.346 real 0m26.016s 00:31:59.346 user 1m30.408s 00:31:59.346 sys 0m11.115s 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:59.346 ************************************ 00:31:59.346 END TEST nvmf_fio_target 00:31:59.346 ************************************ 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:59.346 ************************************ 00:31:59.346 START TEST nvmf_bdevio 00:31:59.346 ************************************ 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:59.346 * Looking for test storage... 
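Before the nvmf_bdevio run below gets going, the nvmf_fio_target teardown traced above is worth condensing. A hedged recap, not part of the captured output (rpc.py path abbreviated; bdev names, NQN and pid copied from the log):

  # Drop the malloc bdevs that backed the fio jobs, one RPC per bdev.
  for bdev in Malloc3 Malloc4 Malloc5 Malloc6; do
      ./spdk/scripts/rpc.py bdev_malloc_delete "$bdev"
  done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # host side first
  ./spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f ./local-job*-verify.state                        # fio verify state files
  modprobe -v -r nvme-tcp                                # the harness runs these under set +e,
  modprobe -v -r nvme-fabrics                            # so an already-unloaded module is fine
  kill 3339631 && wait 3339631                           # the nvmf_tgt started for this test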
00:31:59.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:59.346 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:59.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.605 --rc genhtml_branch_coverage=1 00:31:59.605 --rc genhtml_function_coverage=1 00:31:59.605 --rc genhtml_legend=1 00:31:59.605 --rc geninfo_all_blocks=1 00:31:59.605 --rc geninfo_unexecuted_blocks=1 00:31:59.605 00:31:59.605 ' 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:59.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.605 --rc genhtml_branch_coverage=1 00:31:59.605 --rc genhtml_function_coverage=1 00:31:59.605 --rc genhtml_legend=1 00:31:59.605 --rc geninfo_all_blocks=1 00:31:59.605 --rc geninfo_unexecuted_blocks=1 00:31:59.605 00:31:59.605 ' 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:59.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.605 --rc genhtml_branch_coverage=1 00:31:59.605 --rc genhtml_function_coverage=1 00:31:59.605 --rc genhtml_legend=1 00:31:59.605 --rc geninfo_all_blocks=1 00:31:59.605 --rc geninfo_unexecuted_blocks=1 00:31:59.605 00:31:59.605 ' 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:59.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.605 --rc genhtml_branch_coverage=1 00:31:59.605 --rc genhtml_function_coverage=1 00:31:59.605 --rc genhtml_legend=1 00:31:59.605 --rc geninfo_all_blocks=1 00:31:59.605 --rc geninfo_unexecuted_blocks=1 00:31:59.605 00:31:59.605 ' 00:31:59.605 15:12:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.605 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:59.606 15:12:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:31:59.606 15:12:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:06.175 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:06.175 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:06.175 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:06.175 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:06.175 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:06.175 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:06.175 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:06.175 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:06.175 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:32:06.175 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:06.175 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:06.175 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:06.176 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:06.176 15:12:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:06.176 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:06.176 Found net devices under 0000:86:00.0: cvl_0_0 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:06.176 Found net devices under 0000:86:00.1: cvl_0_1 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:06.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:06.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:32:06.176 00:32:06.176 --- 10.0.0.2 ping statistics --- 00:32:06.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.176 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:06.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:06.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:32:06.176 00:32:06.176 --- 10.0.0.1 ping statistics --- 00:32:06.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.176 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:06.176 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:06.177 15:12:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3346694 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3346694 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3346694 ']' 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:06.177 [2024-12-11 15:12:58.422727] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:06.177 [2024-12-11 15:12:58.423630] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:32:06.177 [2024-12-11 15:12:58.423663] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.177 [2024-12-11 15:12:58.503972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:06.177 [2024-12-11 15:12:58.545583] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.177 [2024-12-11 15:12:58.545620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.177 [2024-12-11 15:12:58.545628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.177 [2024-12-11 15:12:58.545634] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.177 [2024-12-11 15:12:58.545639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.177 [2024-12-11 15:12:58.547139] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:32:06.177 [2024-12-11 15:12:58.547246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:32:06.177 [2024-12-11 15:12:58.547350] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:06.177 [2024-12-11 15:12:58.547352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:32:06.177 [2024-12-11 15:12:58.616622] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:32:06.177 [2024-12-11 15:12:58.616998] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:06.177 [2024-12-11 15:12:58.617640] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:06.177 [2024-12-11 15:12:58.617976] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:06.177 [2024-12-11 15:12:58.618018] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:06.177 [2024-12-11 15:12:58.684041] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:06.177 Malloc0 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.177 15:12:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:06.177 [2024-12-11 15:12:58.772351] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:06.177 { 00:32:06.177 "params": { 00:32:06.177 "name": "Nvme$subsystem", 00:32:06.177 "trtype": "$TEST_TRANSPORT", 00:32:06.177 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:06.177 "adrfam": "ipv4", 00:32:06.177 "trsvcid": "$NVMF_PORT", 00:32:06.177 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:06.177 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:06.177 "hdgst": ${hdgst:-false}, 00:32:06.177 "ddgst": ${ddgst:-false} 00:32:06.177 }, 00:32:06.177 "method": "bdev_nvme_attach_controller" 00:32:06.177 } 00:32:06.177 EOF 00:32:06.177 )") 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:06.177 15:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:06.177 "params": { 00:32:06.177 "name": "Nvme1", 00:32:06.177 "trtype": "tcp", 00:32:06.177 "traddr": "10.0.0.2", 00:32:06.177 "adrfam": "ipv4", 00:32:06.177 "trsvcid": "4420", 00:32:06.177 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:06.177 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:06.177 "hdgst": false, 00:32:06.177 "ddgst": false 00:32:06.177 }, 00:32:06.177 "method": "bdev_nvme_attach_controller" 00:32:06.177 }' 00:32:06.177 [2024-12-11 15:12:58.825863] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
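The bdevio process whose startup banner appears just above is aimed at a target that the preceding rpc_cmd calls have already assembled inside the cvl_0_0_ns_spdk namespace. Collapsed into direct rpc.py invocations, a sketch (paths abbreviated; the harness issues these through its rpc_cmd wrapper and hands bdevio the rendered JSON on fd 62 rather than via process substitution):

  rpc=./spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192               # transport options copied from the trace
  $rpc bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # One way to reproduce the invocation by hand; gen_nvmf_target_json comes from
  # test/nvmf/common.sh and prints the bdev_nvme_attach_controller JSON shown above.
  ./spdk/test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)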
00:32:06.177 [2024-12-11 15:12:58.825917] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346738 ] 00:32:06.177 [2024-12-11 15:12:58.901469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:06.177 [2024-12-11 15:12:58.944783] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.177 [2024-12-11 15:12:58.944891] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.177 [2024-12-11 15:12:58.944892] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:06.177 I/O targets: 00:32:06.177 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:06.177 00:32:06.177 00:32:06.177 CUnit - A unit testing framework for C - Version 2.1-3 00:32:06.177 http://cunit.sourceforge.net/ 00:32:06.177 00:32:06.177 00:32:06.177 Suite: bdevio tests on: Nvme1n1 00:32:06.178 Test: blockdev write read block ...passed 00:32:06.435 Test: blockdev write zeroes read block ...passed 00:32:06.435 Test: blockdev write zeroes read no split ...passed 00:32:06.435 Test: blockdev write zeroes read split ...passed 00:32:06.435 Test: blockdev write zeroes read split partial ...passed 00:32:06.435 Test: blockdev reset ...[2024-12-11 15:12:59.329739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:06.435 [2024-12-11 15:12:59.329802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8e050 (9): Bad file descriptor 00:32:06.435 [2024-12-11 15:12:59.383341] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:32:06.435 passed 00:32:06.435 Test: blockdev write read 8 blocks ...passed 00:32:06.435 Test: blockdev write read size > 128k ...passed 00:32:06.435 Test: blockdev write read invalid size ...passed 00:32:06.435 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:06.435 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:06.435 Test: blockdev write read max offset ...passed 00:32:06.693 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:06.693 Test: blockdev writev readv 8 blocks ...passed 00:32:06.693 Test: blockdev writev readv 30 x 1block ...passed 00:32:06.693 Test: blockdev writev readv block ...passed 00:32:06.693 Test: blockdev writev readv size > 128k ...passed 00:32:06.693 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:06.693 Test: blockdev comparev and writev ...[2024-12-11 15:12:59.594085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:06.693 [2024-12-11 15:12:59.594114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:06.693 [2024-12-11 15:12:59.594129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:06.693 [2024-12-11 15:12:59.594137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:06.693 [2024-12-11 15:12:59.594442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:06.693 [2024-12-11 15:12:59.594455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:06.693 [2024-12-11 15:12:59.594468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:06.693 [2024-12-11 15:12:59.594479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:06.693 [2024-12-11 15:12:59.594772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:06.693 [2024-12-11 15:12:59.594783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:06.693 [2024-12-11 15:12:59.594794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:06.693 [2024-12-11 15:12:59.594802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:06.693 [2024-12-11 15:12:59.595083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:06.693 [2024-12-11 15:12:59.595094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:06.693 [2024-12-11 15:12:59.595106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:06.693 [2024-12-11 15:12:59.595112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:06.693 passed 00:32:06.693 Test: blockdev nvme passthru rw ...passed 00:32:06.693 Test: blockdev nvme passthru vendor specific ...[2024-12-11 15:12:59.677447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:06.693 [2024-12-11 15:12:59.677468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:06.693 [2024-12-11 15:12:59.677579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:06.693 [2024-12-11 15:12:59.677590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:06.693 [2024-12-11 15:12:59.677695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:06.693 [2024-12-11 15:12:59.677704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:06.693 [2024-12-11 15:12:59.677811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:06.693 [2024-12-11 15:12:59.677821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:06.693 passed 00:32:06.693 Test: blockdev nvme admin passthru ...passed 00:32:06.693 Test: blockdev copy ...passed 00:32:06.693 00:32:06.693 Run Summary: Type Total Ran Passed Failed Inactive 00:32:06.693 suites 1 1 n/a 0 0 00:32:06.693 tests 23 23 23 0 0 00:32:06.693 asserts 152 152 152 0 n/a 00:32:06.693 00:32:06.693 Elapsed time = 1.192 seconds 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:06.951 rmmod nvme_tcp 00:32:06.951 rmmod nvme_fabrics 00:32:06.951 rmmod nvme_keyring 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
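The cleanup that continues below finishes by killing the interrupt-mode nvmf_tgt (pid 3346694). The killprocess helper being traced reduces to roughly the following; this is a readability reconstruction of the traced path, not the autotest_common.sh source, and the branch taken when the process turns out to be a sudo wrapper is not shown in this log:

  pid=3346694                                 # nvmfpid recorded when the target was started
  kill -0 "$pid"                              # confirm it is still alive before inspecting it
  name=$(ps --no-headers -o comm= "$pid")     # an SPDK app reports itself as reactor_<core>
  if [ "$name" != sudo ]; then
      echo "killing process with pid $pid"
      kill "$pid"
  fi
  wait "$pid"                                 # reap it so the next test starts from a clean slate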
00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3346694 ']' 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3346694 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3346694 ']' 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3346694 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3346694 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3346694' 00:32:06.951 killing process with pid 3346694 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3346694 00:32:06.951 15:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3346694 00:32:07.211 15:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:07.211 15:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:07.211 15:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:07.211 15:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:32:07.211 15:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:07.211 15:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:07.211 15:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:07.211 15:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:07.211 15:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:07.211 15:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.211 15:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:07.211 15:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.747 15:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:09.747 00:32:09.747 real 0m10.040s 00:32:09.747 user 
0m8.871s 00:32:09.747 sys 0m5.250s 00:32:09.747 15:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.747 15:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:09.747 ************************************ 00:32:09.747 END TEST nvmf_bdevio 00:32:09.747 ************************************ 00:32:09.747 15:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:09.747 00:32:09.747 real 4m32.498s 00:32:09.747 user 9m5.805s 00:32:09.747 sys 1m51.247s 00:32:09.747 15:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.747 15:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:09.747 ************************************ 00:32:09.747 END TEST nvmf_target_core_interrupt_mode 00:32:09.747 ************************************ 00:32:09.747 15:13:02 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:09.747 15:13:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:09.747 15:13:02 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:09.747 15:13:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:09.747 ************************************ 00:32:09.747 START TEST nvmf_interrupt 00:32:09.747 ************************************ 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:09.747 * Looking for test storage... 
00:32:09.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:09.747 15:13:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:09.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.748 --rc genhtml_branch_coverage=1 00:32:09.748 --rc genhtml_function_coverage=1 00:32:09.748 --rc genhtml_legend=1 00:32:09.748 --rc geninfo_all_blocks=1 00:32:09.748 --rc geninfo_unexecuted_blocks=1 00:32:09.748 00:32:09.748 ' 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:09.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.748 --rc genhtml_branch_coverage=1 00:32:09.748 --rc genhtml_function_coverage=1 00:32:09.748 --rc genhtml_legend=1 00:32:09.748 --rc geninfo_all_blocks=1 00:32:09.748 --rc geninfo_unexecuted_blocks=1 00:32:09.748 00:32:09.748 ' 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:09.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.748 --rc genhtml_branch_coverage=1 00:32:09.748 --rc genhtml_function_coverage=1 00:32:09.748 --rc genhtml_legend=1 00:32:09.748 --rc geninfo_all_blocks=1 00:32:09.748 --rc geninfo_unexecuted_blocks=1 00:32:09.748 00:32:09.748 ' 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:09.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.748 --rc genhtml_branch_coverage=1 00:32:09.748 --rc genhtml_function_coverage=1 00:32:09.748 --rc genhtml_legend=1 00:32:09.748 --rc geninfo_all_blocks=1 00:32:09.748 --rc geninfo_unexecuted_blocks=1 00:32:09.748 00:32:09.748 ' 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/interrupt/common.sh 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:09.748 15:13:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:16.317 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.317 15:13:08 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:16.317 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:16.317 Found net devices under 0000:86:00.0: cvl_0_0 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:16.317 Found net devices under 0000:86:00.1: cvl_0_1 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:16.317 15:13:08 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:16.317 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:16.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:16.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:32:16.318 00:32:16.318 --- 10.0.0.2 ping statistics --- 00:32:16.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.318 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:16.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:16.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:32:16.318 00:32:16.318 --- 10.0.0.1 ping statistics --- 00:32:16.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:16.318 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3350395 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3350395 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3350395 ']' 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:16.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:16.318 [2024-12-11 15:13:08.489973] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:16.318 [2024-12-11 15:13:08.490917] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:32:16.318 [2024-12-11 15:13:08.490952] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:16.318 [2024-12-11 15:13:08.572649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:16.318 [2024-12-11 15:13:08.613055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
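For reference, the interrupt-mode bring-up traced in the entries that follow can be reproduced, roughly, with the standalone commands below; scripts/rpc.py stands in for the harness's rpc_cmd wrapper, the relative paths are assumed to be an SPDK checkout, and error handling is omitted.

    # start the target inside the test namespace with interrupt mode enabled, two cores
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &

    # back an AIO bdev with a 10 MB file, then provision the TCP transport, subsystem and listener
    dd if=/dev/zero of=./test/nvmf/target/aiofile bs=2048 count=5000
    ./scripts/rpc.py bdev_aio_create ./test/nvmf/target/aiofile AIO0 2048
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420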
00:32:16.318 [2024-12-11 15:13:08.613091] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:16.318 [2024-12-11 15:13:08.613099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:16.318 [2024-12-11 15:13:08.613106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:16.318 [2024-12-11 15:13:08.613115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:16.318 [2024-12-11 15:13:08.614343] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:16.318 [2024-12-11 15:13:08.614345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.318 [2024-12-11 15:13:08.683648] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:16.318 [2024-12-11 15:13:08.684225] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:16.318 [2024-12-11 15:13:08.684374] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:16.318 5000+0 records in 00:32:16.318 5000+0 records out 00:32:16.318 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0182121 s, 562 MB/s 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:16.318 AIO0 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:16.318 [2024-12-11 15:13:08.827097] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.318 15:13:08 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:16.318 [2024-12-11 15:13:08.859367] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3350395 0 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3350395 0 idle 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3350395 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3350395 -w 256 00:32:16.318 15:13:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:16.318 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3350395 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.25 reactor_0' 00:32:16.318 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3350395 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.25 reactor_0 00:32:16.318 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:16.318 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:16.318 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:16.318 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:32:16.318 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3350395 1 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3350395 1 idle 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3350395 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3350395 -w 256 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3350435 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.00 reactor_1' 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3350435 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.00 reactor_1 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3350561 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in 
{0..1} 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3350395 0 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3350395 0 busy 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3350395 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3350395 -w 256 00:32:16.319 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3350395 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.43 reactor_0' 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3350395 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.43 reactor_0 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3350395 1 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3350395 1 busy 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3350395 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3350395 -w 256 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3350435 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.27 reactor_1' 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3350435 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.27 reactor_1 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:16.577 15:13:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3350561 00:32:26.545 [2024-12-11 15:13:19.354114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c800d0 is same with the state(6) to be set 00:32:26.545 [2024-12-11 15:13:19.354162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c800d0 is same with the state(6) to be set 00:32:26.545 [2024-12-11 15:13:19.354171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c800d0 is same with the state(6) to be set 00:32:26.545 [2024-12-11 15:13:19.354177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c800d0 is same with the state(6) to be set 00:32:26.545 Initializing NVMe Controllers 00:32:26.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:26.545 Controller IO queue size 256, less than required. 00:32:26.545 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:26.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:26.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:26.545 Initialization complete. Launching workers. 
00:32:26.545 ======================================================== 00:32:26.545 Latency(us) 00:32:26.545 Device Information : IOPS MiB/s Average min max 00:32:26.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16578.39 64.76 15449.41 3752.80 28544.01 00:32:26.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16358.39 63.90 15658.08 3692.29 28824.20 00:32:26.545 ======================================================== 00:32:26.545 Total : 32936.79 128.66 15553.05 3692.29 28824.20 00:32:26.545 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3350395 0 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3350395 0 idle 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3350395 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3350395 -w 256 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3350395 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.23 reactor_0' 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3350395 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.23 reactor_0 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3350395 1 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3350395 1 idle 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3350395 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3350395 -w 256 00:32:26.545 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:26.805 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3350435 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:32:26.805 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3350435 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:32:26.805 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:26.805 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:26.805 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:26.805 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:26.805 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:26.805 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:26.805 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:26.805 15:13:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:26.805 15:13:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:27.373 15:13:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:27.373 15:13:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:27.373 15:13:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:27.373 15:13:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:27.373 15:13:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:29.277 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:29.277 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:29.277 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:29.277 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:29.277 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:29.277 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:29.277 15:13:22 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:32:29.278 15:13:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3350395 0 00:32:29.278 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3350395 0 idle 00:32:29.278 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3350395 00:32:29.278 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:29.278 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:29.278 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:29.278 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:29.278 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:29.278 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:29.278 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:29.278 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:29.278 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:29.278 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3350395 -w 256 00:32:29.278 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:29.278 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3350395 root 20 0 128.2g 72960 33792 S 6.7 0.0 0:20.51 reactor_0' 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3350395 root 20 0 128.2g 72960 33792 S 6.7 0.0 0:20.51 reactor_0 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3350395 1 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3350395 1 idle 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3350395 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
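After the spdk_nvme_perf run, the harness attaches the kernel host to the subsystem and re-checks that both reactors have dropped back to idle. The host-side step and the idle probe reduce to roughly the following; the pid, addresses and NQNs are the ones from this run, and the retry loop around the probe is omitted.

    # kernel initiator attach (hostnqn/hostid were generated earlier in the run)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --hostid=80aaeb9f-0274-ea11-906e-0017a4403562
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME    # wait until the namespace appears
    # idle probe: field 9 of the reactor thread's line in top is %CPU; idle here means <= 30,
    # while the busy check used during the perf run wants it at or above BUSY_THRESHOLD=30
    top -bHn 1 -p 3350395 -w 256 | grep reactor_0 | sed -e 's/^\s*//g' | awk '{print $9}'
    # tear the host connection down afterwards with: nvme disconnect -n nqn.2016-06.io.spdk:cnode1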
00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3350395 -w 256 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3350435 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.10 reactor_1' 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3350435 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.10 reactor_1 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:29.537 15:13:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:29.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:29.796 15:13:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:29.796 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:29.796 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:29.796 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:29.796 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:29.796 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:29.796 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:29.796 15:13:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:29.796 15:13:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:29.796 15:13:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:29.796 15:13:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:32:29.796 15:13:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:29.796 15:13:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:32:29.796 15:13:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:29.796 15:13:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:29.796 rmmod nvme_tcp 00:32:29.796 rmmod nvme_fabrics 00:32:29.796 rmmod nvme_keyring 00:32:29.797 15:13:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:29.797 15:13:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:32:29.797 15:13:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:32:29.797 15:13:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3350395 ']' 00:32:29.797 15:13:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3350395 00:32:29.797 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3350395 ']' 00:32:29.797 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3350395 00:32:29.797 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:29.797 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.797 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3350395 00:32:29.797 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:29.797 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:29.797 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3350395' 00:32:29.797 killing process with pid 3350395 00:32:29.797 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3350395 00:32:29.797 15:13:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3350395 00:32:30.056 15:13:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:30.056 15:13:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:30.056 15:13:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:30.056 15:13:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:32:30.056 15:13:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:32:30.056 15:13:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:30.056 15:13:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:32:30.056 15:13:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:30.056 15:13:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:30.056 15:13:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.056 15:13:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:30.056 15:13:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:32.592 15:13:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:32.592 00:32:32.592 real 0m22.774s 00:32:32.592 user 0m39.676s 00:32:32.592 sys 0m8.374s 00:32:32.592 15:13:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:32.592 15:13:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:32.592 ************************************ 00:32:32.592 END TEST nvmf_interrupt 00:32:32.592 ************************************ 00:32:32.592 00:32:32.592 real 27m22.074s 00:32:32.592 user 56m25.204s 00:32:32.592 sys 9m21.759s 00:32:32.592 15:13:25 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:32.592 15:13:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.592 ************************************ 00:32:32.592 END TEST nvmf_tcp 00:32:32.592 ************************************ 00:32:32.592 15:13:25 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:32.592 15:13:25 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:32.592 15:13:25 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:32.592 15:13:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:32.592 15:13:25 -- common/autotest_common.sh@10 -- # set +x 00:32:32.592 ************************************ 00:32:32.592 START TEST spdkcli_nvmf_tcp 00:32:32.592 ************************************ 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:32.592 * Looking for test storage... 00:32:32.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:32.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.592 --rc genhtml_branch_coverage=1 00:32:32.592 --rc genhtml_function_coverage=1 00:32:32.592 --rc genhtml_legend=1 00:32:32.592 --rc geninfo_all_blocks=1 00:32:32.592 --rc geninfo_unexecuted_blocks=1 00:32:32.592 00:32:32.592 ' 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:32.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.592 --rc genhtml_branch_coverage=1 00:32:32.592 --rc genhtml_function_coverage=1 00:32:32.592 --rc genhtml_legend=1 00:32:32.592 --rc geninfo_all_blocks=1 00:32:32.592 --rc geninfo_unexecuted_blocks=1 00:32:32.592 00:32:32.592 ' 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:32.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.592 --rc genhtml_branch_coverage=1 00:32:32.592 --rc genhtml_function_coverage=1 00:32:32.592 --rc genhtml_legend=1 00:32:32.592 --rc geninfo_all_blocks=1 00:32:32.592 --rc geninfo_unexecuted_blocks=1 00:32:32.592 00:32:32.592 ' 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:32.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:32.592 --rc genhtml_branch_coverage=1 00:32:32.592 --rc genhtml_function_coverage=1 00:32:32.592 --rc genhtml_legend=1 00:32:32.592 --rc geninfo_all_blocks=1 00:32:32.592 --rc geninfo_unexecuted_blocks=1 00:32:32.592 00:32:32.592 ' 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/common.sh 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_job.py 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/clear_config.py 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 
00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.592 15:13:25 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:32.593 
15:13:25 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:32.593 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3353249 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3353249 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3353249 ']' 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.593 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.593 [2024-12-11 15:13:25.497708] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:32:32.593 [2024-12-11 15:13:25.497757] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353249 ] 00:32:32.593 [2024-12-11 15:13:25.570713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:32.593 [2024-12-11 15:13:25.613377] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:32.593 [2024-12-11 15:13:25.613380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.851 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.851 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:32.851 15:13:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:32.851 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:32.851 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.851 15:13:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:32.851 15:13:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:32.851 15:13:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:32.851 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.851 15:13:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:32.851 15:13:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:32.851 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:32.851 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:32.851 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:32.851 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:32.851 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:32.851 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:32.851 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:32.851 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:32.851 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:32.851 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:32.851 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:32.851 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:32.851 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:32.851 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:32.851 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:32.851 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 
IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:32.851 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:32.851 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:32.851 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:32.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:32.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:32.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:32.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:32.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:32.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:32.852 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:32.852 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:32.852 ' 00:32:36.135 [2024-12-11 15:13:28.441121] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.069 [2024-12-11 15:13:29.785656] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:39.599 [2024-12-11 15:13:32.277381] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:32:41.497 [2024-12-11 15:13:34.432070] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:43.397 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:43.397 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:43.397 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:43.397 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:43.397 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:43.397 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:43.397 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:43.397 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:43.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:43.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:43.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:43.397 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:43.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:43.397 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:43.397 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:43.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:43.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:43.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:43.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:43.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:43.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:43.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:43.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:43.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:43.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:43.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:43.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:43.397 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:43.397 15:13:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:43.397 15:13:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:43.397 15:13:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:43.397 15:13:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:43.397 15:13:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:43.398 15:13:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:43.398 15:13:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:43.398 15:13:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdkcli.py ll /nvmf 00:32:43.656 15:13:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:43.656 15:13:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:43.656 15:13:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:43.656 15:13:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:43.656 15:13:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:32:43.914 15:13:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:43.914 15:13:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:43.914 15:13:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:43.914 15:13:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:43.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:43.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:43.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:43.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:43.914 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:43.914 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:43.914 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:43.914 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:43.914 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:43.914 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:43.914 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:43.914 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:43.914 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:43.914 ' 00:32:49.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:49.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:49.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:49.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:49.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:49.181 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:49.181 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:49.181 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:49.182 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:49.182 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:49.182 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:49.182 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:49.182 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:49.182 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:49.440 15:13:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:49.440 15:13:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:49.440 15:13:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:32:49.440 15:13:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3353249 00:32:49.440 15:13:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3353249 ']' 00:32:49.440 15:13:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3353249 00:32:49.440 15:13:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:49.440 15:13:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:49.440 15:13:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3353249 00:32:49.440 15:13:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:49.440 15:13:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:49.440 15:13:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3353249' 00:32:49.440 killing process with pid 3353249 00:32:49.440 15:13:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3353249 00:32:49.440 15:13:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3353249 00:32:49.699 15:13:42 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:32:49.699 15:13:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:49.700 15:13:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3353249 ']' 00:32:49.700 15:13:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3353249 00:32:49.700 15:13:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3353249 ']' 00:32:49.700 15:13:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3353249 00:32:49.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (3353249) - No such process 00:32:49.700 15:13:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3353249 is not found' 00:32:49.700 Process with pid 3353249 is not found 00:32:49.700 15:13:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:49.700 15:13:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:49.700 15:13:42 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:49.700 00:32:49.700 real 0m17.348s 00:32:49.700 user 0m38.232s 00:32:49.700 sys 0m0.782s 00:32:49.700 15:13:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:49.700 15:13:42 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:49.700 ************************************ 00:32:49.700 END TEST spdkcli_nvmf_tcp 00:32:49.700 ************************************ 00:32:49.700 15:13:42 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:49.700 15:13:42 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:49.700 15:13:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:49.700 15:13:42 -- common/autotest_common.sh@10 -- # set +x 00:32:49.700 ************************************ 00:32:49.700 START TEST nvmf_identify_passthru 00:32:49.700 ************************************ 00:32:49.700 15:13:42 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:49.700 
* Looking for test storage... 00:32:49.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:32:49.700 15:13:42 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:49.700 15:13:42 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:32:49.700 15:13:42 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:49.959 15:13:42 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:49.959 15:13:42 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:49.960 15:13:42 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.960 15:13:42 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:49.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.960 --rc genhtml_branch_coverage=1 00:32:49.960 --rc genhtml_function_coverage=1 00:32:49.960 --rc genhtml_legend=1 00:32:49.960 --rc geninfo_all_blocks=1 00:32:49.960 --rc geninfo_unexecuted_blocks=1 00:32:49.960 00:32:49.960 ' 00:32:49.960 15:13:42 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:49.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.960 --rc genhtml_branch_coverage=1 00:32:49.960 --rc genhtml_function_coverage=1 00:32:49.960 --rc genhtml_legend=1 00:32:49.960 --rc geninfo_all_blocks=1 00:32:49.960 --rc geninfo_unexecuted_blocks=1 00:32:49.960 00:32:49.960 ' 00:32:49.960 15:13:42 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:49.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.960 --rc genhtml_branch_coverage=1 00:32:49.960 --rc genhtml_function_coverage=1 00:32:49.960 --rc genhtml_legend=1 00:32:49.960 --rc geninfo_all_blocks=1 00:32:49.960 --rc geninfo_unexecuted_blocks=1 00:32:49.960 00:32:49.960 ' 00:32:49.960 15:13:42 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:49.960 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.960 --rc genhtml_branch_coverage=1 00:32:49.960 --rc genhtml_function_coverage=1 00:32:49.960 --rc genhtml_legend=1 00:32:49.960 --rc geninfo_all_blocks=1 00:32:49.960 --rc geninfo_unexecuted_blocks=1 00:32:49.960 00:32:49.960 ' 00:32:49.960 15:13:42 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:32:49.960 15:13:42 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:49.960 15:13:42 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.960 15:13:42 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.960 15:13:42 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.960 15:13:42 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.960 15:13:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.960 15:13:42 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.960 15:13:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:49.960 15:13:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:49.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:49.960 15:13:42 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:32:49.960 15:13:42 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:49.960 15:13:42 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.960 15:13:42 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.960 15:13:42 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.960 15:13:42 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.960 15:13:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.960 15:13:42 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.960 15:13:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:49.960 15:13:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.960 15:13:42 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.960 15:13:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:49.960 15:13:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:49.960 15:13:42 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:32:49.960 15:13:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:32:56.528 15:13:48 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:56.528 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:56.529 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:56.529 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:56.529 Found net devices under 0000:86:00.0: cvl_0_0 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:56.529 Found net devices under 0000:86:00.1: cvl_0_1 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:56.529 15:13:48 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:56.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:56.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:32:56.529 00:32:56.529 --- 10.0.0.2 ping statistics --- 00:32:56.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:56.529 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:56.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:56.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:32:56.529 00:32:56.529 --- 10.0.0.1 ping statistics --- 00:32:56.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:56.529 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:56.529 15:13:48 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:56.529 15:13:48 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:56.529 15:13:48 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:56.529 15:13:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:56.529 15:13:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:56.529 15:13:48 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:56.529 15:13:48 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:56.529 15:13:48 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:56.529 15:13:48 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:56.529 15:13:48 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:56.529 15:13:48 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:56.529 15:13:48 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:56.529 15:13:48 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh 00:32:56.529 15:13:48 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:56.529 15:13:48 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:56.529 15:13:48 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:56.529 15:13:48 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:32:56.529 15:13:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:56.529 15:13:48 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:56.529 15:13:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:56.529 15:13:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:56.529 15:13:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:00.715 15:13:53 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:33:00.715 15:13:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:00.715 15:13:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:00.715 15:13:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:05.014 15:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:05.014 15:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:05.014 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:05.014 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:05.014 15:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:05.014 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:05.014 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:05.014 15:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3360505 00:33:05.014 15:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:05.014 15:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:05.014 15:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3360505 00:33:05.014 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3360505 ']' 00:33:05.014 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.014 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:05.014 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:05.014 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:05.014 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:05.014 [2024-12-11 15:13:57.320188] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:33:05.014 [2024-12-11 15:13:57.320236] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:05.014 [2024-12-11 15:13:57.400924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:05.014 [2024-12-11 15:13:57.442581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:05.014 [2024-12-11 15:13:57.442620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
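Condensed, the passthru bring-up traced above is: flush and split the two E810 ports so that cvl_0_0 lives in a private network namespace with 10.0.0.2 (target side) while cvl_0_1 stays in the host namespace with 10.0.0.1 (initiator side), open TCP/4420 in iptables, verify reachability with ping in both directions, load nvme-tcp, identify the local NVMe disk to capture its serial and model numbers, and finally start nvmf_tgt inside the namespace with --wait-for-rpc so the passthru identify handler can be enabled before the framework initializes. A minimal sketch of that sequence, using only commands that appear in this trace (the rpc.py calls mirror the rpc_cmd invocations and JSON-RPC requests logged just below; paths and flags are this job's, not a general recipe):

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                          # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator-side port stays in the host
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1   # both directions must answer
modprobe nvme-tcp

ip netns exec "$NS" "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
# waitforlisten blocks until /var/tmp/spdk.sock exists, then the paused target is configured:
"$rootdir/scripts/rpc.py" nvmf_set_config --passthru-identify-ctrlr   # same request as the nvmf_set_config JSON below
"$rootdir/scripts/rpc.py" framework_start_init
"$rootdir/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192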
00:33:05.014 [2024-12-11 15:13:57.442628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:05.014 [2024-12-11 15:13:57.442634] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:05.014 [2024-12-11 15:13:57.442639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:05.014 [2024-12-11 15:13:57.444195] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.014 [2024-12-11 15:13:57.444307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:33:05.014 [2024-12-11 15:13:57.444414] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.014 [2024-12-11 15:13:57.444415] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:33:05.014 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:05.014 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:05.014 15:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:05.014 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.014 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:05.014 INFO: Log level set to 20 00:33:05.014 INFO: Requests: 00:33:05.014 { 00:33:05.014 "jsonrpc": "2.0", 00:33:05.014 "method": "nvmf_set_config", 00:33:05.014 "id": 1, 00:33:05.014 "params": { 00:33:05.014 "admin_cmd_passthru": { 00:33:05.014 "identify_ctrlr": true 00:33:05.014 } 00:33:05.014 } 00:33:05.014 } 00:33:05.014 00:33:05.014 INFO: response: 00:33:05.014 { 00:33:05.014 "jsonrpc": "2.0", 00:33:05.014 "id": 1, 00:33:05.014 "result": true 00:33:05.014 } 00:33:05.014 00:33:05.014 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.014 15:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:05.014 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.014 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:05.014 INFO: Setting log level to 20 00:33:05.014 INFO: Setting log level to 20 00:33:05.014 INFO: Log level set to 20 00:33:05.014 INFO: Log level set to 20 00:33:05.014 INFO: Requests: 00:33:05.014 { 00:33:05.014 "jsonrpc": "2.0", 00:33:05.014 "method": "framework_start_init", 00:33:05.014 "id": 1 00:33:05.014 } 00:33:05.014 00:33:05.014 INFO: Requests: 00:33:05.014 { 00:33:05.014 "jsonrpc": "2.0", 00:33:05.015 "method": "framework_start_init", 00:33:05.015 "id": 1 00:33:05.015 } 00:33:05.015 00:33:05.015 [2024-12-11 15:13:57.544431] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:05.015 INFO: response: 00:33:05.015 { 00:33:05.015 "jsonrpc": "2.0", 00:33:05.015 "id": 1, 00:33:05.015 "result": true 00:33:05.015 } 00:33:05.015 00:33:05.015 INFO: response: 00:33:05.015 { 00:33:05.015 "jsonrpc": "2.0", 00:33:05.015 "id": 1, 00:33:05.015 "result": true 00:33:05.015 } 00:33:05.015 00:33:05.015 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.015 15:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:05.015 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.015 15:13:57 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:33:05.015 INFO: Setting log level to 40 00:33:05.015 INFO: Setting log level to 40 00:33:05.015 INFO: Setting log level to 40 00:33:05.015 [2024-12-11 15:13:57.557705] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:05.015 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.015 15:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:05.015 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:05.015 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:05.015 15:13:57 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:05.015 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.015 15:13:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.543 Nvme0n1 00:33:07.543 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.543 15:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:07.543 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.543 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.543 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.543 15:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:07.543 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.543 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.543 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.543 15:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:07.543 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.543 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.543 [2024-12-11 15:14:00.468461] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:07.543 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.543 15:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:07.543 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.543 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:07.543 [ 00:33:07.543 { 00:33:07.543 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:07.543 "subtype": "Discovery", 00:33:07.543 "listen_addresses": [], 00:33:07.543 "allow_any_host": true, 00:33:07.543 "hosts": [] 00:33:07.543 }, 00:33:07.543 { 00:33:07.543 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:07.543 "subtype": "NVMe", 00:33:07.543 "listen_addresses": [ 00:33:07.543 { 00:33:07.543 "trtype": "TCP", 00:33:07.543 "adrfam": "IPv4", 00:33:07.543 "traddr": "10.0.0.2", 00:33:07.543 "trsvcid": "4420" 00:33:07.543 } 00:33:07.543 ], 00:33:07.544 "allow_any_host": true, 00:33:07.544 "hosts": [], 00:33:07.544 "serial_number": 
"SPDK00000000000001", 00:33:07.544 "model_number": "SPDK bdev Controller", 00:33:07.544 "max_namespaces": 1, 00:33:07.544 "min_cntlid": 1, 00:33:07.544 "max_cntlid": 65519, 00:33:07.544 "namespaces": [ 00:33:07.544 { 00:33:07.544 "nsid": 1, 00:33:07.544 "bdev_name": "Nvme0n1", 00:33:07.544 "name": "Nvme0n1", 00:33:07.544 "nguid": "1E65F28E6DF8402485BB0C8E77C0C556", 00:33:07.544 "uuid": "1e65f28e-6df8-4024-85bb-0c8e77c0c556" 00:33:07.544 } 00:33:07.544 ] 00:33:07.544 } 00:33:07.544 ] 00:33:07.544 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.544 15:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:07.544 15:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:07.544 15:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:07.801 15:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:33:07.801 15:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:07.801 15:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:07.801 15:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:08.060 15:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:08.060 15:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:33:08.060 15:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:08.060 15:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:08.060 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.060 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:08.060 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.060 15:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:08.060 15:14:00 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:08.060 15:14:00 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:08.060 15:14:00 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:08.060 15:14:00 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:08.060 15:14:00 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:08.060 15:14:00 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:08.060 15:14:00 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:08.060 rmmod nvme_tcp 00:33:08.060 rmmod nvme_fabrics 00:33:08.060 rmmod nvme_keyring 00:33:08.060 15:14:00 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:08.060 15:14:00 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:08.060 15:14:00 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:08.060 15:14:00 nvmf_identify_passthru -- nvmf/common.sh@517 -- 
# '[' -n 3360505 ']' 00:33:08.060 15:14:00 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3360505 00:33:08.060 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3360505 ']' 00:33:08.060 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3360505 00:33:08.060 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:08.060 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:08.060 15:14:00 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3360505 00:33:08.060 15:14:01 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:08.060 15:14:01 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:08.060 15:14:01 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3360505' 00:33:08.060 killing process with pid 3360505 00:33:08.060 15:14:01 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3360505 00:33:08.060 15:14:01 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3360505 00:33:09.959 15:14:02 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:09.959 15:14:02 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:09.959 15:14:02 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:09.959 15:14:02 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:09.959 15:14:02 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:09.959 15:14:02 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:09.959 15:14:02 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:09.959 15:14:02 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:09.959 15:14:02 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:09.959 15:14:02 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.959 15:14:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:09.959 15:14:02 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.867 15:14:04 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:11.867 00:33:11.867 real 0m21.933s 00:33:11.867 user 0m26.957s 00:33:11.867 sys 0m6.261s 00:33:11.867 15:14:04 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:11.867 15:14:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:11.867 ************************************ 00:33:11.867 END TEST nvmf_identify_passthru 00:33:11.867 ************************************ 00:33:11.867 15:14:04 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/dif.sh 00:33:11.867 15:14:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:11.867 15:14:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:11.867 15:14:04 -- common/autotest_common.sh@10 -- # set +x 00:33:11.867 ************************************ 00:33:11.867 START TEST nvmf_dif 00:33:11.867 ************************************ 00:33:11.867 15:14:04 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/dif.sh 00:33:11.867 * Looking for test 
storage... 00:33:11.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:33:11.867 15:14:04 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:11.867 15:14:04 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:33:11.867 15:14:04 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:11.867 15:14:04 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:11.867 15:14:04 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:11.867 15:14:04 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:11.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.867 --rc genhtml_branch_coverage=1 00:33:11.867 --rc genhtml_function_coverage=1 00:33:11.867 --rc genhtml_legend=1 00:33:11.867 --rc geninfo_all_blocks=1 00:33:11.867 --rc geninfo_unexecuted_blocks=1 00:33:11.867 00:33:11.867 ' 00:33:11.867 15:14:04 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:11.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.867 --rc genhtml_branch_coverage=1 00:33:11.867 --rc genhtml_function_coverage=1 00:33:11.867 --rc genhtml_legend=1 00:33:11.867 --rc geninfo_all_blocks=1 00:33:11.867 --rc geninfo_unexecuted_blocks=1 00:33:11.867 00:33:11.867 ' 00:33:11.867 15:14:04 nvmf_dif -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:11.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.867 --rc genhtml_branch_coverage=1 00:33:11.867 --rc genhtml_function_coverage=1 00:33:11.867 --rc genhtml_legend=1 00:33:11.867 --rc geninfo_all_blocks=1 00:33:11.867 --rc geninfo_unexecuted_blocks=1 00:33:11.867 00:33:11.867 ' 00:33:11.867 15:14:04 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:11.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:11.867 --rc genhtml_branch_coverage=1 00:33:11.867 --rc genhtml_function_coverage=1 00:33:11.867 --rc genhtml_legend=1 00:33:11.867 --rc geninfo_all_blocks=1 00:33:11.867 --rc geninfo_unexecuted_blocks=1 00:33:11.867 00:33:11.867 ' 00:33:11.867 15:14:04 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:11.867 15:14:04 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:11.867 15:14:04 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.867 15:14:04 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.867 15:14:04 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.867 15:14:04 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:11.867 15:14:04 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:11.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:11.867 15:14:04 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:11.867 15:14:04 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:11.867 15:14:04 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:11.867 15:14:04 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:11.867 15:14:04 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:11.868 15:14:04 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:11.868 15:14:04 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:11.868 15:14:04 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:11.868 15:14:04 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:11.868 15:14:04 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:11.868 15:14:04 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:11.868 15:14:04 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:11.868 15:14:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:11.868 15:14:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:11.868 15:14:04 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:11.868 15:14:04 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:11.868 15:14:04 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:33:11.868 15:14:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:18.441 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:18.441 
15:14:10 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:18.441 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:18.441 Found net devices under 0000:86:00.0: cvl_0_0 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:18.441 Found net devices under 0000:86:00.1: cvl_0_1 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:18.441 15:14:10 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:18.442 15:14:10 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:18.442 15:14:10 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:18.442 15:14:10 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:18.442 15:14:10 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:18.442 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:18.442 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.426 ms 00:33:18.442 00:33:18.442 --- 10.0.0.2 ping statistics --- 00:33:18.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.442 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:33:18.442 15:14:10 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:18.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:18.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:33:18.442 00:33:18.442 --- 10.0.0.1 ping statistics --- 00:33:18.442 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:18.442 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:33:18.442 15:14:10 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:18.442 15:14:10 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:33:18.442 15:14:10 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:33:18.442 15:14:10 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:33:20.348 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:20.348 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:20.348 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:20.348 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:20.348 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:20.348 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:20.348 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:20.348 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:20.348 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:20.348 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:20.348 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:20.348 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:20.348 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:20.348 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:33:20.348 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:20.348 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:20.348 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:20.607 15:14:13 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:20.607 15:14:13 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:20.607 15:14:13 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:20.607 15:14:13 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:20.607 15:14:13 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:20.607 15:14:13 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:20.607 15:14:13 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:20.607 15:14:13 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:20.607 15:14:13 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:20.607 15:14:13 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:20.607 15:14:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:20.607 15:14:13 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3365966 00:33:20.607 15:14:13 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3365966 00:33:20.607 15:14:13 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:20.607 15:14:13 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3365966 ']' 00:33:20.607 15:14:13 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.607 15:14:13 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:20.607 15:14:13 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:33:20.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.607 15:14:13 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:20.607 15:14:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:20.607 [2024-12-11 15:14:13.561233] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:33:20.607 [2024-12-11 15:14:13.561278] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:20.607 [2024-12-11 15:14:13.641138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.865 [2024-12-11 15:14:13.681729] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:20.865 [2024-12-11 15:14:13.681762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:20.865 [2024-12-11 15:14:13.681773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:20.865 [2024-12-11 15:14:13.681780] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:20.865 [2024-12-11 15:14:13.681787] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:20.866 [2024-12-11 15:14:13.682384] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.866 15:14:13 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:20.866 15:14:13 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:20.866 15:14:13 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:20.866 15:14:13 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:20.866 15:14:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:20.866 15:14:13 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:20.866 15:14:13 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:20.866 15:14:13 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:20.866 15:14:13 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.866 15:14:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:20.866 [2024-12-11 15:14:13.821767] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.866 15:14:13 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.866 15:14:13 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:20.866 15:14:13 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:20.866 15:14:13 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:20.866 15:14:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:20.866 ************************************ 00:33:20.866 START TEST fio_dif_1_default 00:33:20.866 ************************************ 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:20.866 bdev_null0 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:20.866 [2024-12-11 15:14:13.890074] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:20.866 { 00:33:20.866 "params": { 00:33:20.866 "name": "Nvme$subsystem", 00:33:20.866 "trtype": "$TEST_TRANSPORT", 00:33:20.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:20.866 "adrfam": "ipv4", 00:33:20.866 "trsvcid": "$NVMF_PORT", 00:33:20.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:20.866 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:33:20.866 "hdgst": ${hdgst:-false}, 00:33:20.866 "ddgst": ${ddgst:-false} 00:33:20.866 }, 00:33:20.866 "method": "bdev_nvme_attach_controller" 00:33:20.866 } 00:33:20.866 EOF 00:33:20.866 )") 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:33:20.866 15:14:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:20.866 "params": { 00:33:20.866 "name": "Nvme0", 00:33:20.866 "trtype": "tcp", 00:33:20.866 "traddr": "10.0.0.2", 00:33:20.866 "adrfam": "ipv4", 00:33:20.866 "trsvcid": "4420", 00:33:20.866 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:20.866 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:20.866 "hdgst": false, 00:33:20.866 "ddgst": false 00:33:20.866 }, 00:33:20.866 "method": "bdev_nvme_attach_controller" 00:33:20.866 }' 00:33:21.124 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:21.124 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:21.124 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:21.124 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:33:21.124 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:21.124 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:21.124 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:21.124 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:21.124 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:33:21.124 15:14:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:21.382 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:21.382 fio-3.35 00:33:21.382 Starting 1 thread 00:33:33.594 00:33:33.594 filename0: (groupid=0, jobs=1): err= 0: pid=3366342: Wed Dec 11 15:14:24 2024 00:33:33.594 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10009msec) 00:33:33.594 slat (nsec): min=6078, max=26050, avg=6407.73, stdev=785.83 00:33:33.594 clat (usec): min=40831, max=43703, avg=41001.41, stdev=203.26 00:33:33.594 lat (usec): min=40837, max=43729, avg=41007.81, stdev=203.69 00:33:33.594 clat percentiles (usec): 00:33:33.594 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:33.594 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:33.594 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:33.594 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:33:33.594 | 99.99th=[43779] 00:33:33.594 bw ( KiB/s): min= 384, max= 416, per=99.47%, avg=388.80, stdev=11.72, samples=20 00:33:33.594 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:33:33.594 lat (msec) : 50=100.00% 00:33:33.594 cpu : usr=92.38%, sys=7.37%, ctx=13, majf=0, minf=0 00:33:33.594 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:33.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.594 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:33.594 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:33.594 00:33:33.594 Run status group 0 (all jobs): 
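For the dif job, fio never touches the kernel NVMe/TCP initiator: the SPDK bdev fio plugin is LD_PRELOADed into /usr/src/fio/fio, and the bdev_nvme_attach_controller fragment printed above (tcp, 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode0) tells the plugin which remote controller to attach before the job runs. The harness streams that config to fio over /dev/fd descriptors; below is a standalone sketch of the equivalent invocation, assuming this workspace's paths, a "subsystems"/"bdev" wrapper around the printed fragment (the wrapper itself is not shown verbatim in this excerpt), and illustrative job options matching the filename0 output that follows:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# the spdk_bdev ioengine addresses bdevs by name; Nvme0n1 is the namespace exposed after attach
LD_PRELOAD="$rootdir/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf="$conf" --thread=1 \
    --name=filename0 --filename=Nvme0n1 --rw=randread --bs=4096 \
    --iodepth=4 --runtime=10   # job options are illustrative, taken from the filename0 summary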
00:33:33.594 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10009-10009msec 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.594 00:33:33.594 real 0m11.237s 00:33:33.594 user 0m16.121s 00:33:33.594 sys 0m1.006s 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:33.594 ************************************ 00:33:33.594 END TEST fio_dif_1_default 00:33:33.594 ************************************ 00:33:33.594 15:14:25 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:33.594 15:14:25 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:33.594 15:14:25 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:33.594 15:14:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:33.594 ************************************ 00:33:33.594 START TEST fio_dif_1_multi_subsystems 00:33:33.594 ************************************ 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.594 bdev_null0 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.594 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.595 [2024-12-11 15:14:25.199588] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.595 bdev_null1 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:33.595 { 00:33:33.595 "params": { 00:33:33.595 "name": "Nvme$subsystem", 00:33:33.595 "trtype": "$TEST_TRANSPORT", 00:33:33.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:33.595 "adrfam": "ipv4", 00:33:33.595 "trsvcid": "$NVMF_PORT", 00:33:33.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:33.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:33.595 "hdgst": ${hdgst:-false}, 00:33:33.595 "ddgst": ${ddgst:-false} 00:33:33.595 }, 00:33:33.595 "method": "bdev_nvme_attach_controller" 00:33:33.595 } 00:33:33.595 EOF 00:33:33.595 )") 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:33.595 { 00:33:33.595 "params": { 00:33:33.595 "name": "Nvme$subsystem", 00:33:33.595 "trtype": "$TEST_TRANSPORT", 00:33:33.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:33.595 "adrfam": "ipv4", 00:33:33.595 "trsvcid": "$NVMF_PORT", 00:33:33.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:33.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:33.595 "hdgst": ${hdgst:-false}, 00:33:33.595 "ddgst": ${ddgst:-false} 00:33:33.595 }, 00:33:33.595 "method": "bdev_nvme_attach_controller" 00:33:33.595 } 00:33:33.595 EOF 00:33:33.595 )") 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:33.595 "params": { 00:33:33.595 "name": "Nvme0", 00:33:33.595 "trtype": "tcp", 00:33:33.595 "traddr": "10.0.0.2", 00:33:33.595 "adrfam": "ipv4", 00:33:33.595 "trsvcid": "4420", 00:33:33.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:33.595 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:33.595 "hdgst": false, 00:33:33.595 "ddgst": false 00:33:33.595 }, 00:33:33.595 "method": "bdev_nvme_attach_controller" 00:33:33.595 },{ 00:33:33.595 "params": { 00:33:33.595 "name": "Nvme1", 00:33:33.595 "trtype": "tcp", 00:33:33.595 "traddr": "10.0.0.2", 00:33:33.595 "adrfam": "ipv4", 00:33:33.595 "trsvcid": "4420", 00:33:33.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:33.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:33.595 "hdgst": false, 00:33:33.595 "ddgst": false 00:33:33.595 }, 00:33:33.595 "method": "bdev_nvme_attach_controller" 00:33:33.595 }' 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # 
asan_lib= 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:33:33.595 15:14:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:33.595 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:33.595 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:33.595 fio-3.35 00:33:33.595 Starting 2 threads 00:33:43.576 00:33:43.576 filename0: (groupid=0, jobs=1): err= 0: pid=3368309: Wed Dec 11 15:14:36 2024 00:33:43.576 read: IOPS=237, BW=951KiB/s (974kB/s)(9552KiB/10042msec) 00:33:43.576 slat (nsec): min=5868, max=68062, avg=8603.92, stdev=5153.16 00:33:43.576 clat (usec): min=351, max=42521, avg=16794.09, stdev=20144.30 00:33:43.576 lat (usec): min=358, max=42527, avg=16802.69, stdev=20143.47 00:33:43.576 clat percentiles (usec): 00:33:43.576 | 1.00th=[ 388], 5.00th=[ 396], 10.00th=[ 408], 20.00th=[ 424], 00:33:43.576 | 30.00th=[ 441], 40.00th=[ 453], 50.00th=[ 494], 60.00th=[ 725], 00:33:43.576 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:33:43.576 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:33:43.576 | 99.99th=[42730] 00:33:43.576 bw ( KiB/s): min= 448, max= 1344, per=71.04%, avg=953.60, stdev=235.75, samples=20 00:33:43.576 iops : min= 112, max= 336, avg=238.40, stdev=58.94, samples=20 00:33:43.576 lat (usec) : 500=51.13%, 750=9.00% 00:33:43.576 lat (msec) : 2=0.17%, 50=39.70% 00:33:43.576 cpu : usr=98.30%, sys=1.40%, ctx=35, majf=0, minf=122 00:33:43.576 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:43.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.576 issued rwts: total=2388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.576 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:43.576 filename1: (groupid=0, jobs=1): err= 0: pid=3368310: Wed Dec 11 15:14:36 2024 00:33:43.576 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10005msec) 00:33:43.576 slat (nsec): min=6376, max=38047, avg=11879.02, stdev=8572.53 00:33:43.576 clat (usec): min=392, max=42367, avg=40795.53, stdev=5840.84 00:33:43.576 lat (usec): min=399, max=42394, avg=40807.41, stdev=5840.79 00:33:43.576 clat percentiles (usec): 00:33:43.576 | 1.00th=[ 445], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:43.576 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:33:43.576 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:43.576 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:43.576 | 99.99th=[42206] 00:33:43.576 bw ( KiB/s): min= 352, max= 480, per=29.22%, avg=392.42, stdev=25.78, samples=19 00:33:43.576 iops : min= 88, max= 120, avg=98.11, stdev= 6.45, samples=19 00:33:43.576 lat (usec) : 500=1.33%, 750=0.71% 00:33:43.576 lat (msec) : 50=97.96% 00:33:43.576 cpu : usr=98.31%, sys=1.40%, ctx=11, majf=0, minf=81 00:33:43.576 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:43.576 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.576 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:43.576 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:43.576 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:43.576 00:33:43.576 Run status group 0 (all jobs): 00:33:43.576 READ: bw=1342KiB/s (1374kB/s), 392KiB/s-951KiB/s (401kB/s-974kB/s), io=13.2MiB (13.8MB), run=10005-10042msec 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.836 00:33:43.836 real 0m11.540s 00:33:43.836 user 0m26.770s 00:33:43.836 sys 0m0.669s 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:43.836 15:14:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:43.836 ************************************ 00:33:43.836 END TEST fio_dif_1_multi_subsystems 00:33:43.836 ************************************ 00:33:43.836 15:14:36 nvmf_dif -- target/dif.sh@143 -- 
# run_test fio_dif_rand_params fio_dif_rand_params 00:33:43.836 15:14:36 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:43.836 15:14:36 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:43.836 15:14:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:43.836 ************************************ 00:33:43.836 START TEST fio_dif_rand_params 00:33:43.836 ************************************ 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.836 bdev_null0 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:43.836 [2024-12-11 15:14:36.818695] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
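At this point fio_dif_rand_params has created subsystem 0 the same way the earlier tests did, but with a DIF type 3 null bdev: bdev_null_create, nvmf_create_subsystem, nvmf_subsystem_add_ns and nvmf_subsystem_add_listener, confirmed by the TCP listen notice. A rough hand-driven equivalent against a running nvmf_tgt is sketched below; the scripts/rpc.py entry point and the explicit nvmf_create_transport call are assumptions (the CI script issues the same RPCs through rpc_cmd and sets up the transport earlier in the run), while the RPC names and arguments are taken from the trace.

# Sketch, assuming a running nvmf_tgt reachable via scripts/rpc.py:
# create a 64 MiB null bdev with 512-byte blocks, 16-byte metadata and DIF
# type 3, attach it to cnode0 and listen on NVMe/TCP 10.0.0.2:4420.
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420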
00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:43.836 { 00:33:43.836 "params": { 00:33:43.836 "name": "Nvme$subsystem", 00:33:43.836 "trtype": "$TEST_TRANSPORT", 00:33:43.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:43.836 "adrfam": "ipv4", 00:33:43.836 "trsvcid": "$NVMF_PORT", 00:33:43.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:43.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:43.836 "hdgst": ${hdgst:-false}, 00:33:43.836 "ddgst": ${ddgst:-false} 00:33:43.836 }, 00:33:43.836 "method": "bdev_nvme_attach_controller" 00:33:43.836 } 00:33:43.836 EOF 00:33:43.836 )") 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@584 -- # jq . 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:43.836 "params": { 00:33:43.836 "name": "Nvme0", 00:33:43.836 "trtype": "tcp", 00:33:43.836 "traddr": "10.0.0.2", 00:33:43.836 "adrfam": "ipv4", 00:33:43.836 "trsvcid": "4420", 00:33:43.836 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:43.836 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:43.836 "hdgst": false, 00:33:43.836 "ddgst": false 00:33:43.836 }, 00:33:43.836 "method": "bdev_nvme_attach_controller" 00:33:43.836 }' 00:33:43.836 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:43.837 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:43.837 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:43.837 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:33:43.837 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:43.837 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:44.126 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:44.126 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:44.126 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:33:44.126 15:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:44.384 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:44.384 ... 
00:33:44.384 fio-3.35 00:33:44.384 Starting 3 threads 00:33:49.753 00:33:49.753 filename0: (groupid=0, jobs=1): err= 0: pid=3370270: Wed Dec 11 15:14:42 2024 00:33:49.753 read: IOPS=308, BW=38.5MiB/s (40.4MB/s)(193MiB/5004msec) 00:33:49.753 slat (nsec): min=6673, max=47890, avg=23148.76, stdev=8026.67 00:33:49.753 clat (usec): min=4158, max=51633, avg=9700.19, stdev=6906.88 00:33:49.753 lat (usec): min=4164, max=51661, avg=9723.34, stdev=6906.34 00:33:49.753 clat percentiles (usec): 00:33:49.753 | 1.00th=[ 4817], 5.00th=[ 6521], 10.00th=[ 7242], 20.00th=[ 7767], 00:33:49.753 | 30.00th=[ 8160], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8848], 00:33:49.753 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10552], 00:33:49.753 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51643], 99.95th=[51643], 00:33:49.753 | 99.99th=[51643] 00:33:49.753 bw ( KiB/s): min=26880, max=44544, per=33.74%, avg=39054.22, stdev=4929.43, samples=9 00:33:49.753 iops : min= 210, max= 348, avg=305.11, stdev=38.51, samples=9 00:33:49.753 lat (msec) : 10=89.44%, 20=7.65%, 50=2.14%, 100=0.78% 00:33:49.753 cpu : usr=97.06%, sys=2.58%, ctx=6, majf=0, minf=9 00:33:49.753 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.753 issued rwts: total=1543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.753 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:49.753 filename0: (groupid=0, jobs=1): err= 0: pid=3370271: Wed Dec 11 15:14:42 2024 00:33:49.753 read: IOPS=300, BW=37.5MiB/s (39.3MB/s)(188MiB/5002msec) 00:33:49.753 slat (nsec): min=6467, max=38733, avg=16585.35, stdev=6254.16 00:33:49.753 clat (usec): min=3065, max=52083, avg=9977.94, stdev=4372.75 00:33:49.753 lat (usec): min=3076, max=52104, avg=9994.52, stdev=4373.24 00:33:49.753 clat percentiles (usec): 00:33:49.753 | 1.00th=[ 4080], 5.00th=[ 6259], 10.00th=[ 6652], 20.00th=[ 7832], 00:33:49.753 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10421], 00:33:49.753 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11731], 95.00th=[12125], 00:33:49.753 | 99.00th=[13829], 99.50th=[49546], 99.90th=[51119], 99.95th=[52167], 00:33:49.753 | 99.99th=[52167] 00:33:49.753 bw ( KiB/s): min=35328, max=40960, per=33.35%, avg=38599.11, stdev=1986.18, samples=9 00:33:49.753 iops : min= 276, max= 320, avg=301.56, stdev=15.52, samples=9 00:33:49.753 lat (msec) : 4=0.67%, 10=48.83%, 20=49.50%, 50=0.80%, 100=0.20% 00:33:49.753 cpu : usr=95.86%, sys=3.84%, ctx=7, majf=0, minf=9 00:33:49.753 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.753 issued rwts: total=1501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.753 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:49.753 filename0: (groupid=0, jobs=1): err= 0: pid=3370272: Wed Dec 11 15:14:42 2024 00:33:49.753 read: IOPS=296, BW=37.0MiB/s (38.8MB/s)(185MiB/5002msec) 00:33:49.753 slat (nsec): min=6397, max=38669, avg=15809.85, stdev=6289.23 00:33:49.753 clat (usec): min=3510, max=53908, avg=10113.66, stdev=5110.04 00:33:49.753 lat (usec): min=3517, max=53926, avg=10129.47, stdev=5110.78 00:33:49.753 clat percentiles (usec): 00:33:49.753 | 1.00th=[ 4178], 5.00th=[ 6063], 10.00th=[ 6521], 20.00th=[ 
8029], 00:33:49.753 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10028], 60.00th=[10421], 00:33:49.753 | 70.00th=[10814], 80.00th=[11207], 90.00th=[11731], 95.00th=[12256], 00:33:49.753 | 99.00th=[48497], 99.50th=[50070], 99.90th=[53216], 99.95th=[53740], 00:33:49.753 | 99.99th=[53740] 00:33:49.753 bw ( KiB/s): min=26624, max=46848, per=32.71%, avg=37859.56, stdev=5200.08, samples=9 00:33:49.753 iops : min= 208, max= 366, avg=295.78, stdev=40.63, samples=9 00:33:49.753 lat (msec) : 4=0.68%, 10=49.70%, 20=48.21%, 50=0.88%, 100=0.54% 00:33:49.753 cpu : usr=96.48%, sys=3.20%, ctx=8, majf=0, minf=9 00:33:49.753 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:49.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:49.753 issued rwts: total=1481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:49.753 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:49.753 00:33:49.753 Run status group 0 (all jobs): 00:33:49.753 READ: bw=113MiB/s (119MB/s), 37.0MiB/s-38.5MiB/s (38.8MB/s-40.4MB/s), io=566MiB (593MB), run=5002-5004msec 00:33:50.012 15:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:50.012 15:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:50.012 15:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:50.012 15:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:50.012 15:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:50.012 15:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:50.012 15:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.012 15:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:50.012 15:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.012 15:14:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:50.012 15:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.012 15:14:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:50.012 bdev_null0 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:50.012 [2024-12-11 15:14:43.037476] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:50.012 bdev_null1 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.012 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:50.272 bdev_null2 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:50.272 15:14:43 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:50.272 15:14:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:50.272 { 00:33:50.272 "params": { 00:33:50.272 "name": "Nvme$subsystem", 00:33:50.272 "trtype": "$TEST_TRANSPORT", 00:33:50.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:50.272 "adrfam": "ipv4", 00:33:50.272 "trsvcid": "$NVMF_PORT", 00:33:50.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:50.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:50.272 "hdgst": ${hdgst:-false}, 00:33:50.272 "ddgst": ${ddgst:-false} 00:33:50.272 }, 00:33:50.272 "method": "bdev_nvme_attach_controller" 00:33:50.272 } 00:33:50.272 EOF 00:33:50.272 )") 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:50.273 { 00:33:50.273 "params": { 00:33:50.273 "name": "Nvme$subsystem", 00:33:50.273 "trtype": "$TEST_TRANSPORT", 00:33:50.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:50.273 "adrfam": "ipv4", 00:33:50.273 "trsvcid": "$NVMF_PORT", 00:33:50.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:50.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:50.273 "hdgst": ${hdgst:-false}, 00:33:50.273 "ddgst": ${ddgst:-false} 00:33:50.273 }, 00:33:50.273 "method": "bdev_nvme_attach_controller" 00:33:50.273 } 00:33:50.273 EOF 00:33:50.273 )") 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:50.273 15:14:43 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:50.273 { 00:33:50.273 "params": { 00:33:50.273 "name": "Nvme$subsystem", 00:33:50.273 "trtype": "$TEST_TRANSPORT", 00:33:50.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:50.273 "adrfam": "ipv4", 00:33:50.273 "trsvcid": "$NVMF_PORT", 00:33:50.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:50.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:50.273 "hdgst": ${hdgst:-false}, 00:33:50.273 "ddgst": ${ddgst:-false} 00:33:50.273 }, 00:33:50.273 "method": "bdev_nvme_attach_controller" 00:33:50.273 } 00:33:50.273 EOF 00:33:50.273 )") 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:50.273 "params": { 00:33:50.273 "name": "Nvme0", 00:33:50.273 "trtype": "tcp", 00:33:50.273 "traddr": "10.0.0.2", 00:33:50.273 "adrfam": "ipv4", 00:33:50.273 "trsvcid": "4420", 00:33:50.273 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:50.273 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:50.273 "hdgst": false, 00:33:50.273 "ddgst": false 00:33:50.273 }, 00:33:50.273 "method": "bdev_nvme_attach_controller" 00:33:50.273 },{ 00:33:50.273 "params": { 00:33:50.273 "name": "Nvme1", 00:33:50.273 "trtype": "tcp", 00:33:50.273 "traddr": "10.0.0.2", 00:33:50.273 "adrfam": "ipv4", 00:33:50.273 "trsvcid": "4420", 00:33:50.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:50.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:50.273 "hdgst": false, 00:33:50.273 "ddgst": false 00:33:50.273 }, 00:33:50.273 "method": "bdev_nvme_attach_controller" 00:33:50.273 },{ 00:33:50.273 "params": { 00:33:50.273 "name": "Nvme2", 00:33:50.273 "trtype": "tcp", 00:33:50.273 "traddr": "10.0.0.2", 00:33:50.273 "adrfam": "ipv4", 00:33:50.273 "trsvcid": "4420", 00:33:50.273 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:50.273 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:50.273 "hdgst": false, 00:33:50.273 "ddgst": false 00:33:50.273 }, 00:33:50.273 "method": "bdev_nvme_attach_controller" 00:33:50.273 }' 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:50.273 
15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:33:50.273 15:14:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:50.531 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:50.531 ... 00:33:50.531 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:50.531 ... 00:33:50.531 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:50.531 ... 00:33:50.531 fio-3.35 00:33:50.531 Starting 24 threads 00:34:02.738 00:34:02.738 filename0: (groupid=0, jobs=1): err= 0: pid=3371350: Wed Dec 11 15:14:54 2024 00:34:02.738 read: IOPS=605, BW=2422KiB/s (2481kB/s)(23.7MiB/10013msec) 00:34:02.738 slat (nsec): min=7229, max=76727, avg=35687.12, stdev=12340.81 00:34:02.738 clat (usec): min=11588, max=29304, avg=26116.58, stdev=1286.28 00:34:02.738 lat (usec): min=11598, max=29329, avg=26152.27, stdev=1287.78 00:34:02.738 clat percentiles (usec): 00:34:02.738 | 1.00th=[23725], 5.00th=[25297], 10.00th=[25560], 20.00th=[25560], 00:34:02.738 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:34:02.738 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27657], 95.00th=[28181], 00:34:02.738 | 99.00th=[28705], 99.50th=[28967], 99.90th=[28967], 99.95th=[29230], 00:34:02.738 | 99.99th=[29230] 00:34:02.738 bw ( KiB/s): min= 2299, max= 2560, per=4.17%, avg=2417.74, stdev=59.17, samples=19 00:34:02.738 iops : min= 574, max= 640, avg=604.32, stdev=14.87, samples=19 00:34:02.738 lat (msec) : 20=0.79%, 50=99.21% 00:34:02.738 cpu : usr=98.30%, sys=1.07%, ctx=84, majf=0, minf=9 00:34:02.738 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.738 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.738 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.738 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.738 filename0: (groupid=0, jobs=1): err= 0: pid=3371352: Wed Dec 11 15:14:54 2024 00:34:02.738 read: IOPS=604, BW=2418KiB/s (2476kB/s)(23.6MiB/10004msec) 00:34:02.738 slat (nsec): min=7664, max=81462, avg=44320.49, stdev=12133.57 00:34:02.738 clat (usec): min=3645, max=49225, avg=26083.68, stdev=1969.79 00:34:02.738 lat (usec): min=3675, max=49268, avg=26128.00, stdev=1969.63 00:34:02.738 clat percentiles (usec): 00:34:02.738 | 1.00th=[23725], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:34:02.738 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:34:02.738 | 70.00th=[26084], 80.00th=[26608], 90.00th=[27657], 95.00th=[28181], 00:34:02.738 | 99.00th=[28705], 99.50th=[28967], 99.90th=[49021], 99.95th=[49021], 00:34:02.738 | 99.99th=[49021] 00:34:02.738 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2404.26, stdev=68.22, samples=19 00:34:02.738 iops : min= 576, max= 640, avg=600.95, stdev=17.01, samples=19 00:34:02.738 lat (msec) : 4=0.23%, 10=0.03%, 20=0.45%, 50=99.29% 00:34:02.738 cpu : usr=98.19%, sys=1.08%, ctx=112, majf=0, minf=9 00:34:02.738 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 
8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.738 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.738 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.738 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.738 filename0: (groupid=0, jobs=1): err= 0: pid=3371353: Wed Dec 11 15:14:54 2024 00:34:02.738 read: IOPS=605, BW=2424KiB/s (2482kB/s)(23.7MiB/10018msec) 00:34:02.738 slat (nsec): min=6687, max=78134, avg=34723.88, stdev=12785.83 00:34:02.738 clat (usec): min=11471, max=41942, avg=26114.64, stdev=1697.80 00:34:02.738 lat (usec): min=11485, max=41982, avg=26149.37, stdev=1699.03 00:34:02.738 clat percentiles (usec): 00:34:02.738 | 1.00th=[18744], 5.00th=[25035], 10.00th=[25560], 20.00th=[25560], 00:34:02.738 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:34:02.738 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27657], 95.00th=[28181], 00:34:02.738 | 99.00th=[28967], 99.50th=[29230], 99.90th=[41681], 99.95th=[41681], 00:34:02.738 | 99.99th=[41681] 00:34:02.738 bw ( KiB/s): min= 2299, max= 2688, per=4.17%, avg=2420.80, stdev=98.39, samples=20 00:34:02.738 iops : min= 574, max= 672, avg=605.10, stdev=24.61, samples=20 00:34:02.738 lat (msec) : 20=1.47%, 50=98.53% 00:34:02.738 cpu : usr=98.55%, sys=1.04%, ctx=30, majf=0, minf=9 00:34:02.739 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:02.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.739 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.739 issued rwts: total=6070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.739 filename0: (groupid=0, jobs=1): err= 0: pid=3371354: Wed Dec 11 15:14:54 2024 00:34:02.739 read: IOPS=603, BW=2412KiB/s (2470kB/s)(23.6MiB/10002msec) 00:34:02.739 slat (nsec): min=4348, max=78692, avg=38391.24, stdev=13727.99 00:34:02.739 clat (usec): min=15858, max=50972, avg=26225.30, stdev=1651.40 00:34:02.739 lat (usec): min=15910, max=50985, avg=26263.69, stdev=1649.65 00:34:02.739 clat percentiles (usec): 00:34:02.739 | 1.00th=[24249], 5.00th=[25297], 10.00th=[25297], 20.00th=[25560], 00:34:02.739 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:34:02.739 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27657], 95.00th=[28181], 00:34:02.739 | 99.00th=[28705], 99.50th=[28967], 99.90th=[51119], 99.95th=[51119], 00:34:02.739 | 99.99th=[51119] 00:34:02.739 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2404.79, stdev=68.42, samples=19 00:34:02.739 iops : min= 576, max= 640, avg=601.16, stdev=17.09, samples=19 00:34:02.739 lat (msec) : 20=0.43%, 50=99.30%, 100=0.27% 00:34:02.739 cpu : usr=97.91%, sys=1.22%, ctx=169, majf=0, minf=9 00:34:02.739 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:02.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.739 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.739 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.739 filename0: (groupid=0, jobs=1): err= 0: pid=3371355: Wed Dec 11 15:14:54 2024 00:34:02.739 read: IOPS=604, BW=2416KiB/s (2474kB/s)(23.6MiB/10013msec) 00:34:02.739 slat (nsec): min=6317, max=76792, avg=25138.61, 
stdev=14237.87 00:34:02.739 clat (usec): min=14025, max=41373, avg=26299.52, stdev=1294.82 00:34:02.739 lat (usec): min=14074, max=41391, avg=26324.65, stdev=1293.11 00:34:02.739 clat percentiles (usec): 00:34:02.739 | 1.00th=[24249], 5.00th=[25297], 10.00th=[25560], 20.00th=[25822], 00:34:02.739 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26084], 00:34:02.739 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27657], 95.00th=[28181], 00:34:02.739 | 99.00th=[28967], 99.50th=[29230], 99.90th=[38536], 99.95th=[38536], 00:34:02.739 | 99.99th=[41157] 00:34:02.739 bw ( KiB/s): min= 2304, max= 2560, per=4.16%, avg=2411.00, stdev=76.88, samples=19 00:34:02.739 iops : min= 576, max= 640, avg=602.63, stdev=19.20, samples=19 00:34:02.739 lat (msec) : 20=0.40%, 50=99.60% 00:34:02.739 cpu : usr=98.64%, sys=0.98%, ctx=17, majf=0, minf=9 00:34:02.739 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.739 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.739 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.739 filename0: (groupid=0, jobs=1): err= 0: pid=3371356: Wed Dec 11 15:14:54 2024 00:34:02.739 read: IOPS=604, BW=2417KiB/s (2475kB/s)(23.6MiB/10008msec) 00:34:02.739 slat (nsec): min=6368, max=78918, avg=32095.21, stdev=15199.98 00:34:02.739 clat (usec): min=13634, max=33942, avg=26233.41, stdev=1202.89 00:34:02.739 lat (usec): min=13690, max=33962, avg=26265.50, stdev=1201.26 00:34:02.739 clat percentiles (usec): 00:34:02.739 | 1.00th=[24249], 5.00th=[25297], 10.00th=[25560], 20.00th=[25822], 00:34:02.739 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:34:02.739 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27657], 95.00th=[28181], 00:34:02.739 | 99.00th=[28967], 99.50th=[29492], 99.90th=[33817], 99.95th=[33817], 00:34:02.739 | 99.99th=[33817] 00:34:02.739 bw ( KiB/s): min= 2180, max= 2560, per=4.16%, avg=2412.16, stdev=97.40, samples=19 00:34:02.739 iops : min= 545, max= 640, avg=603.00, stdev=24.34, samples=19 00:34:02.739 lat (msec) : 20=0.40%, 50=99.60% 00:34:02.739 cpu : usr=98.65%, sys=0.96%, ctx=14, majf=0, minf=9 00:34:02.739 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.739 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.739 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.739 filename0: (groupid=0, jobs=1): err= 0: pid=3371357: Wed Dec 11 15:14:54 2024 00:34:02.739 read: IOPS=603, BW=2413KiB/s (2470kB/s)(23.6MiB/10001msec) 00:34:02.739 slat (nsec): min=4953, max=99591, avg=43354.63, stdev=17060.95 00:34:02.739 clat (usec): min=15952, max=52409, avg=26118.35, stdev=1617.13 00:34:02.739 lat (usec): min=15969, max=52423, avg=26161.70, stdev=1617.16 00:34:02.739 clat percentiles (usec): 00:34:02.739 | 1.00th=[24249], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:34:02.739 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:34:02.739 | 70.00th=[26084], 80.00th=[26608], 90.00th=[27395], 95.00th=[27919], 00:34:02.739 | 99.00th=[28705], 99.50th=[28967], 99.90th=[49546], 99.95th=[49546], 00:34:02.739 | 99.99th=[52167] 00:34:02.739 bw ( KiB/s): min= 2304, 
max= 2560, per=4.15%, avg=2405.00, stdev=68.10, samples=19 00:34:02.739 iops : min= 576, max= 640, avg=601.21, stdev=17.01, samples=19 00:34:02.739 lat (msec) : 20=0.48%, 50=99.49%, 100=0.03% 00:34:02.739 cpu : usr=98.86%, sys=0.70%, ctx=31, majf=0, minf=9 00:34:02.739 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.739 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.739 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.739 filename0: (groupid=0, jobs=1): err= 0: pid=3371358: Wed Dec 11 15:14:54 2024 00:34:02.739 read: IOPS=623, BW=2495KiB/s (2555kB/s)(24.5MiB/10042msec) 00:34:02.739 slat (nsec): min=6558, max=98168, avg=39565.36, stdev=20550.09 00:34:02.739 clat (usec): min=10428, max=58955, avg=25198.18, stdev=3286.70 00:34:02.739 lat (usec): min=10447, max=58973, avg=25237.75, stdev=3295.40 00:34:02.739 clat percentiles (usec): 00:34:02.739 | 1.00th=[15270], 5.00th=[17957], 10.00th=[19268], 20.00th=[25297], 00:34:02.739 | 30.00th=[25560], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:34:02.739 | 70.00th=[26084], 80.00th=[26608], 90.00th=[27395], 95.00th=[28181], 00:34:02.739 | 99.00th=[30278], 99.50th=[34341], 99.90th=[49546], 99.95th=[49546], 00:34:02.739 | 99.99th=[58983] 00:34:02.739 bw ( KiB/s): min= 2304, max= 3392, per=4.31%, avg=2500.30, stdev=252.60, samples=20 00:34:02.739 iops : min= 576, max= 848, avg=624.95, stdev=63.19, samples=20 00:34:02.739 lat (msec) : 20=11.21%, 50=88.76%, 100=0.03% 00:34:02.739 cpu : usr=98.44%, sys=1.10%, ctx=43, majf=0, minf=9 00:34:02.739 IO depths : 1=5.1%, 2=10.3%, 4=21.8%, 8=55.2%, 16=7.6%, 32=0.0%, >=64=0.0% 00:34:02.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.739 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.739 issued rwts: total=6264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.739 filename1: (groupid=0, jobs=1): err= 0: pid=3371359: Wed Dec 11 15:14:54 2024 00:34:02.739 read: IOPS=603, BW=2413KiB/s (2470kB/s)(23.6MiB/10001msec) 00:34:02.739 slat (nsec): min=4816, max=98134, avg=41575.24, stdev=16270.49 00:34:02.739 clat (usec): min=15819, max=50097, avg=26181.38, stdev=1629.80 00:34:02.739 lat (usec): min=15869, max=50110, avg=26222.96, stdev=1628.06 00:34:02.739 clat percentiles (usec): 00:34:02.739 | 1.00th=[24249], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:34:02.739 | 30.00th=[25822], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:34:02.739 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27657], 95.00th=[28181], 00:34:02.739 | 99.00th=[28967], 99.50th=[28967], 99.90th=[50070], 99.95th=[50070], 00:34:02.739 | 99.99th=[50070] 00:34:02.739 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2405.00, stdev=68.10, samples=19 00:34:02.739 iops : min= 576, max= 640, avg=601.21, stdev=17.01, samples=19 00:34:02.739 lat (msec) : 20=0.45%, 50=99.29%, 100=0.27% 00:34:02.739 cpu : usr=98.67%, sys=0.82%, ctx=62, majf=0, minf=9 00:34:02.739 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.739 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.739 issued rwts: total=6032,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:34:02.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.739 filename1: (groupid=0, jobs=1): err= 0: pid=3371360: Wed Dec 11 15:14:54 2024 00:34:02.739 read: IOPS=603, BW=2416KiB/s (2474kB/s)(23.6MiB/10015msec) 00:34:02.739 slat (usec): min=6, max=105, avg=30.57, stdev=21.06 00:34:02.739 clat (usec): min=13605, max=38564, avg=26245.03, stdev=1287.64 00:34:02.739 lat (usec): min=13673, max=38580, avg=26275.60, stdev=1284.80 00:34:02.739 clat percentiles (usec): 00:34:02.739 | 1.00th=[24249], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:34:02.739 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26084], 00:34:02.739 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27657], 95.00th=[28181], 00:34:02.739 | 99.00th=[28967], 99.50th=[29492], 99.90th=[38536], 99.95th=[38536], 00:34:02.739 | 99.99th=[38536] 00:34:02.739 bw ( KiB/s): min= 2304, max= 2560, per=4.16%, avg=2411.00, stdev=76.88, samples=19 00:34:02.739 iops : min= 576, max= 640, avg=602.63, stdev=19.20, samples=19 00:34:02.739 lat (msec) : 20=0.45%, 50=99.55% 00:34:02.739 cpu : usr=98.63%, sys=0.97%, ctx=21, majf=0, minf=9 00:34:02.739 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.739 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.739 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.739 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.739 filename1: (groupid=0, jobs=1): err= 0: pid=3371362: Wed Dec 11 15:14:54 2024 00:34:02.739 read: IOPS=606, BW=2424KiB/s (2483kB/s)(23.7MiB/10005msec) 00:34:02.739 slat (nsec): min=6729, max=75489, avg=24907.79, stdev=13473.90 00:34:02.739 clat (usec): min=11554, max=41361, avg=26216.37, stdev=1473.64 00:34:02.739 lat (usec): min=11587, max=41409, avg=26241.28, stdev=1472.48 00:34:02.739 clat percentiles (usec): 00:34:02.739 | 1.00th=[19268], 5.00th=[25297], 10.00th=[25560], 20.00th=[25822], 00:34:02.739 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26084], 00:34:02.739 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27657], 95.00th=[28181], 00:34:02.739 | 99.00th=[28705], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:34:02.739 | 99.99th=[41157] 00:34:02.739 bw ( KiB/s): min= 2299, max= 2688, per=4.18%, avg=2424.42, stdev=99.71, samples=19 00:34:02.739 iops : min= 574, max= 672, avg=606.00, stdev=24.94, samples=19 00:34:02.740 lat (msec) : 20=1.09%, 50=98.91% 00:34:02.740 cpu : usr=98.17%, sys=1.18%, ctx=65, majf=0, minf=9 00:34:02.740 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.740 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.740 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.740 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.740 filename1: (groupid=0, jobs=1): err= 0: pid=3371363: Wed Dec 11 15:14:54 2024 00:34:02.740 read: IOPS=615, BW=2462KiB/s (2521kB/s)(24.1MiB/10010msec) 00:34:02.740 slat (nsec): min=7005, max=61908, avg=15531.26, stdev=8744.18 00:34:02.740 clat (usec): min=1101, max=29023, avg=25871.64, stdev=3263.50 00:34:02.740 lat (usec): min=1115, max=29047, avg=25887.17, stdev=3263.57 00:34:02.740 clat percentiles (usec): 00:34:02.740 | 1.00th=[ 2180], 5.00th=[25035], 10.00th=[25560], 20.00th=[25822], 00:34:02.740 | 
30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26084], 00:34:02.740 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27919], 95.00th=[28443], 00:34:02.740 | 99.00th=[28705], 99.50th=[28967], 99.90th=[28967], 99.95th=[28967], 00:34:02.740 | 99.99th=[28967] 00:34:02.740 bw ( KiB/s): min= 2304, max= 3328, per=4.25%, avg=2465.16, stdev=221.14, samples=19 00:34:02.740 iops : min= 576, max= 832, avg=616.21, stdev=55.30, samples=19 00:34:02.740 lat (msec) : 2=0.78%, 4=0.26%, 10=0.78%, 20=1.04%, 50=97.14% 00:34:02.740 cpu : usr=98.48%, sys=1.10%, ctx=38, majf=0, minf=9 00:34:02.740 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:02.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.740 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.740 issued rwts: total=6160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.740 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.740 filename1: (groupid=0, jobs=1): err= 0: pid=3371364: Wed Dec 11 15:14:54 2024 00:34:02.740 read: IOPS=606, BW=2424KiB/s (2483kB/s)(23.7MiB/10005msec) 00:34:02.740 slat (usec): min=7, max=108, avg=30.64, stdev=12.90 00:34:02.740 clat (usec): min=11517, max=29319, avg=26163.59, stdev=1428.31 00:34:02.740 lat (usec): min=11544, max=29333, avg=26194.23, stdev=1427.90 00:34:02.740 clat percentiles (usec): 00:34:02.740 | 1.00th=[19268], 5.00th=[25297], 10.00th=[25560], 20.00th=[25822], 00:34:02.740 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:34:02.740 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27657], 95.00th=[28181], 00:34:02.740 | 99.00th=[28705], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:34:02.740 | 99.99th=[29230] 00:34:02.740 bw ( KiB/s): min= 2299, max= 2688, per=4.18%, avg=2424.42, stdev=99.71, samples=19 00:34:02.740 iops : min= 574, max= 672, avg=606.00, stdev=24.94, samples=19 00:34:02.740 lat (msec) : 20=1.04%, 50=98.96% 00:34:02.740 cpu : usr=98.66%, sys=0.93%, ctx=23, majf=0, minf=9 00:34:02.740 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.740 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.740 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.740 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.740 filename1: (groupid=0, jobs=1): err= 0: pid=3371365: Wed Dec 11 15:14:54 2024 00:34:02.740 read: IOPS=601, BW=2404KiB/s (2462kB/s)(23.6MiB/10041msec) 00:34:02.740 slat (nsec): min=4634, max=82773, avg=41669.20, stdev=12955.47 00:34:02.740 clat (usec): min=15830, max=49769, avg=26182.35, stdev=1635.52 00:34:02.740 lat (usec): min=15881, max=49783, avg=26224.02, stdev=1634.26 00:34:02.740 clat percentiles (usec): 00:34:02.740 | 1.00th=[24249], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:34:02.740 | 30.00th=[25822], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:34:02.740 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27657], 95.00th=[28181], 00:34:02.740 | 99.00th=[28705], 99.50th=[28967], 99.90th=[49546], 99.95th=[49546], 00:34:02.740 | 99.99th=[49546] 00:34:02.740 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2408.50, stdev=68.10, samples=20 00:34:02.740 iops : min= 576, max= 640, avg=602.05, stdev=16.98, samples=20 00:34:02.740 lat (msec) : 20=0.43%, 50=99.57% 00:34:02.740 cpu : usr=98.18%, sys=1.21%, ctx=133, majf=0, minf=9 00:34:02.740 IO depths : 1=6.3%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:02.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.740 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.740 issued rwts: total=6035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.740 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.740 filename1: (groupid=0, jobs=1): err= 0: pid=3371366: Wed Dec 11 15:14:54 2024 00:34:02.740 read: IOPS=604, BW=2419KiB/s (2477kB/s)(23.6MiB/10009msec) 00:34:02.740 slat (nsec): min=4320, max=64457, avg=21997.34, stdev=11350.33 00:34:02.740 clat (usec): min=6755, max=52400, avg=26254.53, stdev=1774.04 00:34:02.740 lat (usec): min=6764, max=52418, avg=26276.53, stdev=1775.12 00:34:02.740 clat percentiles (usec): 00:34:02.740 | 1.00th=[23200], 5.00th=[25297], 10.00th=[25560], 20.00th=[25822], 00:34:02.740 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26084], 60.00th=[26084], 00:34:02.740 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27657], 95.00th=[28181], 00:34:02.740 | 99.00th=[28705], 99.50th=[28967], 99.90th=[41681], 99.95th=[41681], 00:34:02.740 | 99.99th=[52167] 00:34:02.740 bw ( KiB/s): min= 2304, max= 2554, per=4.15%, avg=2406.74, stdev=64.44, samples=19 00:34:02.740 iops : min= 576, max= 638, avg=601.58, stdev=16.02, samples=19 00:34:02.740 lat (msec) : 10=0.26%, 20=0.46%, 50=99.24%, 100=0.03% 00:34:02.740 cpu : usr=98.30%, sys=1.06%, ctx=126, majf=0, minf=9 00:34:02.740 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:02.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.740 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.740 issued rwts: total=6054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.740 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.740 filename1: (groupid=0, jobs=1): err= 0: pid=3371367: Wed Dec 11 15:14:54 2024 00:34:02.740 read: IOPS=606, BW=2424KiB/s (2483kB/s)(23.7MiB/10005msec) 00:34:02.740 slat (nsec): min=10960, max=77385, avg=31778.90, stdev=13383.33 00:34:02.740 clat (usec): min=11568, max=29344, avg=26155.44, stdev=1430.13 00:34:02.740 lat (usec): min=11599, max=29366, avg=26187.22, stdev=1429.85 00:34:02.740 clat percentiles (usec): 00:34:02.740 | 1.00th=[19268], 5.00th=[25297], 10.00th=[25560], 20.00th=[25560], 00:34:02.740 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:34:02.740 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27657], 95.00th=[28181], 00:34:02.740 | 99.00th=[28705], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:34:02.740 | 99.99th=[29230] 00:34:02.740 bw ( KiB/s): min= 2299, max= 2688, per=4.18%, avg=2424.42, stdev=99.71, samples=19 00:34:02.740 iops : min= 574, max= 672, avg=606.00, stdev=24.94, samples=19 00:34:02.740 lat (msec) : 20=1.06%, 50=98.94% 00:34:02.740 cpu : usr=98.53%, sys=1.03%, ctx=86, majf=0, minf=9 00:34:02.740 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:02.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.740 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.740 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.740 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.740 filename2: (groupid=0, jobs=1): err= 0: pid=3371368: Wed Dec 11 15:14:54 2024 00:34:02.740 read: IOPS=606, BW=2424KiB/s (2483kB/s)(23.7MiB/10005msec) 00:34:02.740 slat (usec): min=7, max=120, 
avg=36.78, stdev=17.30 00:34:02.740 clat (usec): min=11553, max=29349, avg=26051.56, stdev=1408.43 00:34:02.740 lat (usec): min=11593, max=29389, avg=26088.35, stdev=1410.89 00:34:02.740 clat percentiles (usec): 00:34:02.740 | 1.00th=[19268], 5.00th=[25297], 10.00th=[25297], 20.00th=[25560], 00:34:02.740 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:34:02.740 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27395], 95.00th=[27919], 00:34:02.740 | 99.00th=[28705], 99.50th=[28705], 99.90th=[29230], 99.95th=[29230], 00:34:02.740 | 99.99th=[29230] 00:34:02.740 bw ( KiB/s): min= 2299, max= 2688, per=4.18%, avg=2424.42, stdev=99.71, samples=19 00:34:02.740 iops : min= 574, max= 672, avg=606.00, stdev=24.94, samples=19 00:34:02.740 lat (msec) : 20=1.06%, 50=98.94% 00:34:02.740 cpu : usr=98.90%, sys=0.68%, ctx=20, majf=0, minf=9 00:34:02.740 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.740 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.740 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.740 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.740 filename2: (groupid=0, jobs=1): err= 0: pid=3371369: Wed Dec 11 15:14:54 2024 00:34:02.740 read: IOPS=603, BW=2416KiB/s (2474kB/s)(23.6MiB/10014msec) 00:34:02.740 slat (usec): min=6, max=129, avg=43.08, stdev=18.47 00:34:02.740 clat (usec): min=15707, max=36793, avg=26115.43, stdev=1198.10 00:34:02.740 lat (usec): min=15724, max=36810, avg=26158.51, stdev=1197.37 00:34:02.740 clat percentiles (usec): 00:34:02.740 | 1.00th=[24249], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:34:02.740 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:34:02.740 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27395], 95.00th=[28181], 00:34:02.740 | 99.00th=[28705], 99.50th=[28967], 99.90th=[36963], 99.95th=[36963], 00:34:02.740 | 99.99th=[36963] 00:34:02.740 bw ( KiB/s): min= 2299, max= 2560, per=4.16%, avg=2411.47, stdev=77.47, samples=19 00:34:02.740 iops : min= 574, max= 640, avg=602.79, stdev=19.42, samples=19 00:34:02.740 lat (msec) : 20=0.53%, 50=99.47% 00:34:02.740 cpu : usr=98.60%, sys=0.89%, ctx=63, majf=0, minf=9 00:34:02.740 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:02.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.740 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.740 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.740 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.740 filename2: (groupid=0, jobs=1): err= 0: pid=3371370: Wed Dec 11 15:14:54 2024 00:34:02.740 read: IOPS=604, BW=2416KiB/s (2474kB/s)(23.6MiB/10012msec) 00:34:02.740 slat (usec): min=6, max=103, avg=44.26, stdev=17.49 00:34:02.740 clat (usec): min=15707, max=35352, avg=26104.74, stdev=1152.56 00:34:02.740 lat (usec): min=15745, max=35371, avg=26149.00, stdev=1152.34 00:34:02.740 clat percentiles (usec): 00:34:02.740 | 1.00th=[24249], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:34:02.740 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:34:02.741 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27395], 95.00th=[27919], 00:34:02.741 | 99.00th=[28705], 99.50th=[28967], 99.90th=[35390], 99.95th=[35390], 00:34:02.741 | 99.99th=[35390] 00:34:02.741 bw ( KiB/s): min= 2180, max= 2560, 
per=4.16%, avg=2411.95, stdev=97.35, samples=19 00:34:02.741 iops : min= 545, max= 640, avg=602.95, stdev=24.33, samples=19 00:34:02.741 lat (msec) : 20=0.43%, 50=99.57% 00:34:02.741 cpu : usr=98.80%, sys=0.82%, ctx=12, majf=0, minf=9 00:34:02.741 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:02.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.741 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.741 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.741 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.741 filename2: (groupid=0, jobs=1): err= 0: pid=3371372: Wed Dec 11 15:14:54 2024 00:34:02.741 read: IOPS=606, BW=2424KiB/s (2483kB/s)(23.7MiB/10005msec) 00:34:02.741 slat (nsec): min=6989, max=90049, avg=36644.43, stdev=16661.32 00:34:02.741 clat (usec): min=11458, max=29351, avg=26044.93, stdev=1413.23 00:34:02.741 lat (usec): min=11499, max=29381, avg=26081.57, stdev=1415.29 00:34:02.741 clat percentiles (usec): 00:34:02.741 | 1.00th=[19268], 5.00th=[25297], 10.00th=[25297], 20.00th=[25560], 00:34:02.741 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:34:02.741 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27657], 95.00th=[27919], 00:34:02.741 | 99.00th=[28705], 99.50th=[28705], 99.90th=[29230], 99.95th=[29230], 00:34:02.741 | 99.99th=[29230] 00:34:02.741 bw ( KiB/s): min= 2299, max= 2688, per=4.18%, avg=2424.42, stdev=99.71, samples=19 00:34:02.741 iops : min= 574, max= 672, avg=606.00, stdev=24.94, samples=19 00:34:02.741 lat (msec) : 20=1.04%, 50=98.96% 00:34:02.741 cpu : usr=98.83%, sys=0.79%, ctx=14, majf=0, minf=9 00:34:02.741 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:02.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.741 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.741 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.741 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.741 filename2: (groupid=0, jobs=1): err= 0: pid=3371373: Wed Dec 11 15:14:54 2024 00:34:02.741 read: IOPS=604, BW=2417KiB/s (2475kB/s)(23.6MiB/10009msec) 00:34:02.741 slat (usec): min=6, max=127, avg=24.24, stdev=15.64 00:34:02.741 clat (usec): min=17233, max=38563, avg=26258.08, stdev=1546.01 00:34:02.741 lat (usec): min=17244, max=38584, avg=26282.31, stdev=1547.28 00:34:02.741 clat percentiles (usec): 00:34:02.741 | 1.00th=[18744], 5.00th=[25297], 10.00th=[25560], 20.00th=[25822], 00:34:02.741 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:34:02.741 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27919], 95.00th=[28181], 00:34:02.741 | 99.00th=[29230], 99.50th=[34866], 99.90th=[38536], 99.95th=[38536], 00:34:02.741 | 99.99th=[38536] 00:34:02.741 bw ( KiB/s): min= 2299, max= 2560, per=4.17%, avg=2417.74, stdev=84.51, samples=19 00:34:02.741 iops : min= 574, max= 640, avg=604.32, stdev=21.18, samples=19 00:34:02.741 lat (msec) : 20=1.22%, 50=98.78% 00:34:02.741 cpu : usr=98.52%, sys=0.94%, ctx=88, majf=0, minf=9 00:34:02.741 IO depths : 1=5.5%, 2=11.7%, 4=24.6%, 8=51.2%, 16=7.0%, 32=0.0%, >=64=0.0% 00:34:02.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.741 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.741 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.741 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:34:02.741 filename2: (groupid=0, jobs=1): err= 0: pid=3371374: Wed Dec 11 15:14:54 2024 00:34:02.741 read: IOPS=606, BW=2424KiB/s (2483kB/s)(23.7MiB/10005msec) 00:34:02.741 slat (nsec): min=7838, max=75113, avg=35081.46, stdev=12289.91 00:34:02.741 clat (usec): min=11529, max=29332, avg=26113.27, stdev=1428.08 00:34:02.741 lat (usec): min=11546, max=29370, avg=26148.35, stdev=1428.33 00:34:02.741 clat percentiles (usec): 00:34:02.741 | 1.00th=[19268], 5.00th=[25297], 10.00th=[25560], 20.00th=[25560], 00:34:02.741 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 60.00th=[26084], 00:34:02.741 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27657], 95.00th=[28181], 00:34:02.741 | 99.00th=[28705], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:34:02.741 | 99.99th=[29230] 00:34:02.741 bw ( KiB/s): min= 2299, max= 2688, per=4.18%, avg=2424.42, stdev=99.71, samples=19 00:34:02.741 iops : min= 574, max= 672, avg=606.00, stdev=24.94, samples=19 00:34:02.741 lat (msec) : 20=1.06%, 50=98.94% 00:34:02.741 cpu : usr=98.51%, sys=0.95%, ctx=77, majf=0, minf=9 00:34:02.741 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:02.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.741 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.741 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.741 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.741 filename2: (groupid=0, jobs=1): err= 0: pid=3371375: Wed Dec 11 15:14:54 2024 00:34:02.741 read: IOPS=604, BW=2418KiB/s (2477kB/s)(23.6MiB/10003msec) 00:34:02.741 slat (usec): min=6, max=104, avg=45.06, stdev=17.88 00:34:02.741 clat (usec): min=3588, max=48740, avg=26033.51, stdev=1931.24 00:34:02.741 lat (usec): min=3595, max=48780, avg=26078.57, stdev=1932.84 00:34:02.741 clat percentiles (usec): 00:34:02.741 | 1.00th=[23987], 5.00th=[25035], 10.00th=[25297], 20.00th=[25560], 00:34:02.741 | 30.00th=[25560], 40.00th=[25822], 50.00th=[25822], 60.00th=[26084], 00:34:02.741 | 70.00th=[26084], 80.00th=[26608], 90.00th=[27395], 95.00th=[27919], 00:34:02.741 | 99.00th=[28705], 99.50th=[28967], 99.90th=[48497], 99.95th=[48497], 00:34:02.741 | 99.99th=[48497] 00:34:02.741 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2404.47, stdev=67.89, samples=19 00:34:02.741 iops : min= 576, max= 640, avg=601.00, stdev=16.93, samples=19 00:34:02.741 lat (msec) : 4=0.26%, 20=0.46%, 50=99.27% 00:34:02.741 cpu : usr=98.96%, sys=0.65%, ctx=35, majf=0, minf=9 00:34:02.741 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:02.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.741 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.741 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.741 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.741 filename2: (groupid=0, jobs=1): err= 0: pid=3371376: Wed Dec 11 15:14:54 2024 00:34:02.741 read: IOPS=604, BW=2418KiB/s (2476kB/s)(23.6MiB/10006msec) 00:34:02.741 slat (nsec): min=6728, max=78447, avg=26517.30, stdev=16064.83 00:34:02.741 clat (usec): min=17247, max=34812, avg=26213.47, stdev=1110.26 00:34:02.741 lat (usec): min=17280, max=34822, avg=26239.99, stdev=1110.28 00:34:02.741 clat percentiles (usec): 00:34:02.741 | 1.00th=[23987], 5.00th=[25297], 10.00th=[25560], 20.00th=[25822], 00:34:02.741 | 30.00th=[25822], 40.00th=[25822], 50.00th=[26084], 
60.00th=[26084], 00:34:02.741 | 70.00th=[26346], 80.00th=[26870], 90.00th=[27657], 95.00th=[28181], 00:34:02.741 | 99.00th=[28705], 99.50th=[28705], 99.90th=[34341], 99.95th=[34866], 00:34:02.741 | 99.99th=[34866] 00:34:02.741 bw ( KiB/s): min= 2304, max= 2560, per=4.17%, avg=2418.00, stdev=84.13, samples=19 00:34:02.741 iops : min= 576, max= 640, avg=604.42, stdev=21.02, samples=19 00:34:02.741 lat (msec) : 20=0.45%, 50=99.55% 00:34:02.741 cpu : usr=98.70%, sys=0.89%, ctx=22, majf=0, minf=9 00:34:02.741 IO depths : 1=5.9%, 2=12.0%, 4=24.8%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:34:02.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.741 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.741 issued rwts: total=6048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.741 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:02.741 00:34:02.741 Run status group 0 (all jobs): 00:34:02.741 READ: bw=56.6MiB/s (59.4MB/s), 2404KiB/s-2495KiB/s (2462kB/s-2555kB/s), io=569MiB (596MB), run=10001-10042msec 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.741 15:14:54 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.741 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.742 bdev_null0 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.742 [2024-12-11 15:14:54.830917] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.742 bdev_null1 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:02.742 { 00:34:02.742 "params": { 00:34:02.742 "name": "Nvme$subsystem", 00:34:02.742 "trtype": "$TEST_TRANSPORT", 00:34:02.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:02.742 "adrfam": "ipv4", 00:34:02.742 "trsvcid": "$NVMF_PORT", 00:34:02.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:02.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:02.742 "hdgst": ${hdgst:-false}, 00:34:02.742 "ddgst": ${ddgst:-false} 00:34:02.742 }, 00:34:02.742 "method": "bdev_nvme_attach_controller" 00:34:02.742 } 00:34:02.742 EOF 00:34:02.742 )") 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:02.742 { 00:34:02.742 "params": { 00:34:02.742 "name": "Nvme$subsystem", 00:34:02.742 "trtype": "$TEST_TRANSPORT", 00:34:02.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:02.742 "adrfam": "ipv4", 00:34:02.742 "trsvcid": "$NVMF_PORT", 00:34:02.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:02.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:02.742 "hdgst": ${hdgst:-false}, 00:34:02.742 "ddgst": ${ddgst:-false} 00:34:02.742 }, 00:34:02.742 "method": "bdev_nvme_attach_controller" 00:34:02.742 } 00:34:02.742 EOF 00:34:02.742 )") 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:02.742 "params": { 00:34:02.742 "name": "Nvme0", 00:34:02.742 "trtype": "tcp", 00:34:02.742 "traddr": "10.0.0.2", 00:34:02.742 "adrfam": "ipv4", 00:34:02.742 "trsvcid": "4420", 00:34:02.742 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:02.742 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:02.742 "hdgst": false, 00:34:02.742 "ddgst": false 00:34:02.742 }, 00:34:02.742 "method": "bdev_nvme_attach_controller" 00:34:02.742 },{ 00:34:02.742 "params": { 00:34:02.742 "name": "Nvme1", 00:34:02.742 "trtype": "tcp", 00:34:02.742 "traddr": "10.0.0.2", 00:34:02.742 "adrfam": "ipv4", 00:34:02.742 "trsvcid": "4420", 00:34:02.742 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:02.742 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:02.742 "hdgst": false, 00:34:02.742 "ddgst": false 00:34:02.742 }, 00:34:02.742 "method": "bdev_nvme_attach_controller" 00:34:02.742 }' 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:34:02.742 15:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.743 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:02.743 ... 00:34:02.743 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:02.743 ... 
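(Annotation: the xtrace above condenses to the short sequence sketched below. The test issues these calls through its rpc_cmd helper against an already-running nvmf target; using scripts/rpc.py as the entry point, the relative plugin path, and the on-disk file names ./nvmf.json and ./dif.fio are assumptions made for the sketch — in the run above the config and job file are fed through /dev/fd/62 and /dev/fd/61 — while the RPC names, arguments, and addresses are taken from the trace itself.)

  # Two DIF-type-1 null bdevs (64 MB, 512-byte blocks, 16-byte metadata), each exported
  # through its own NVMe-oF/TCP subsystem listening on 10.0.0.2:4420.
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # ...the same four calls are repeated for bdev_null1 / cnode1, then fio is run through
  # the SPDK bdev fio plugin against the generated JSON:
  LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./nvmf.json ./dif.fio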
00:34:02.743 fio-3.35 00:34:02.743 Starting 4 threads 00:34:09.311 00:34:09.311 filename0: (groupid=0, jobs=1): err= 0: pid=3373413: Wed Dec 11 15:15:01 2024 00:34:09.311 read: IOPS=2626, BW=20.5MiB/s (21.5MB/s)(103MiB/5002msec) 00:34:09.311 slat (nsec): min=6371, max=75529, avg=17227.67, stdev=9990.45 00:34:09.311 clat (usec): min=488, max=5537, avg=2989.51, stdev=382.19 00:34:09.311 lat (usec): min=508, max=5568, avg=3006.74, stdev=383.43 00:34:09.311 clat percentiles (usec): 00:34:09.311 | 1.00th=[ 1909], 5.00th=[ 2278], 10.00th=[ 2507], 20.00th=[ 2737], 00:34:09.311 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3064], 00:34:09.311 | 70.00th=[ 3130], 80.00th=[ 3228], 90.00th=[ 3326], 95.00th=[ 3458], 00:34:09.311 | 99.00th=[ 4047], 99.50th=[ 4424], 99.90th=[ 5080], 99.95th=[ 5342], 00:34:09.311 | 99.99th=[ 5538] 00:34:09.312 bw ( KiB/s): min=20064, max=21531, per=25.61%, avg=20891.89, stdev=419.70, samples=9 00:34:09.312 iops : min= 2508, max= 2691, avg=2611.44, stdev=52.39, samples=9 00:34:09.312 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:34:09.312 lat (msec) : 2=1.49%, 4=97.31%, 10=1.17% 00:34:09.312 cpu : usr=97.28%, sys=2.34%, ctx=9, majf=0, minf=9 00:34:09.312 IO depths : 1=0.8%, 2=10.9%, 4=61.0%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.312 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.312 issued rwts: total=13137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.312 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:09.312 filename0: (groupid=0, jobs=1): err= 0: pid=3373414: Wed Dec 11 15:15:01 2024 00:34:09.312 read: IOPS=2505, BW=19.6MiB/s (20.5MB/s)(97.9MiB/5001msec) 00:34:09.312 slat (nsec): min=6261, max=62662, avg=14968.97, stdev=11056.82 00:34:09.312 clat (usec): min=957, max=5716, avg=3146.45, stdev=381.70 00:34:09.312 lat (usec): min=969, max=5764, avg=3161.42, stdev=381.67 00:34:09.312 clat percentiles (usec): 00:34:09.312 | 1.00th=[ 2180], 5.00th=[ 2606], 10.00th=[ 2835], 20.00th=[ 2999], 00:34:09.312 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3130], 00:34:09.312 | 70.00th=[ 3228], 80.00th=[ 3326], 90.00th=[ 3490], 95.00th=[ 3785], 00:34:09.312 | 99.00th=[ 4555], 99.50th=[ 4948], 99.90th=[ 5538], 99.95th=[ 5604], 00:34:09.312 | 99.99th=[ 5735] 00:34:09.312 bw ( KiB/s): min=19696, max=20656, per=24.65%, avg=20107.44, stdev=342.76, samples=9 00:34:09.312 iops : min= 2462, max= 2582, avg=2513.33, stdev=42.88, samples=9 00:34:09.312 lat (usec) : 1000=0.01% 00:34:09.312 lat (msec) : 2=0.59%, 4=96.58%, 10=2.82% 00:34:09.312 cpu : usr=96.84%, sys=2.80%, ctx=9, majf=0, minf=9 00:34:09.312 IO depths : 1=0.2%, 2=5.0%, 4=67.9%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.312 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.312 issued rwts: total=12532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.312 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:09.312 filename1: (groupid=0, jobs=1): err= 0: pid=3373415: Wed Dec 11 15:15:01 2024 00:34:09.312 read: IOPS=2540, BW=19.8MiB/s (20.8MB/s)(99.3MiB/5003msec) 00:34:09.312 slat (nsec): min=6255, max=64076, avg=14191.38, stdev=10502.20 00:34:09.312 clat (usec): min=618, max=5853, avg=3104.91, stdev=367.63 00:34:09.312 lat (usec): min=630, max=5860, avg=3119.11, stdev=368.06 00:34:09.312 clat percentiles (usec): 00:34:09.312 | 
1.00th=[ 2024], 5.00th=[ 2540], 10.00th=[ 2737], 20.00th=[ 2966], 00:34:09.312 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3130], 00:34:09.312 | 70.00th=[ 3195], 80.00th=[ 3261], 90.00th=[ 3425], 95.00th=[ 3687], 00:34:09.312 | 99.00th=[ 4424], 99.50th=[ 4686], 99.90th=[ 5342], 99.95th=[ 5342], 00:34:09.312 | 99.99th=[ 5866] 00:34:09.312 bw ( KiB/s): min=19952, max=20832, per=25.00%, avg=20394.67, stdev=257.99, samples=9 00:34:09.312 iops : min= 2494, max= 2604, avg=2549.33, stdev=32.25, samples=9 00:34:09.312 lat (usec) : 750=0.01%, 1000=0.02% 00:34:09.312 lat (msec) : 2=0.88%, 4=96.97%, 10=2.12% 00:34:09.312 cpu : usr=97.16%, sys=2.50%, ctx=10, majf=0, minf=9 00:34:09.312 IO depths : 1=0.6%, 2=5.5%, 4=66.9%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.312 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.312 issued rwts: total=12710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.312 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:09.312 filename1: (groupid=0, jobs=1): err= 0: pid=3373416: Wed Dec 11 15:15:01 2024 00:34:09.312 read: IOPS=2527, BW=19.7MiB/s (20.7MB/s)(98.8MiB/5001msec) 00:34:09.312 slat (nsec): min=5945, max=64060, avg=15004.72, stdev=11034.67 00:34:09.312 clat (usec): min=665, max=5684, avg=3118.52, stdev=371.46 00:34:09.312 lat (usec): min=684, max=5712, avg=3133.53, stdev=371.53 00:34:09.312 clat percentiles (usec): 00:34:09.312 | 1.00th=[ 2089], 5.00th=[ 2540], 10.00th=[ 2769], 20.00th=[ 2966], 00:34:09.312 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3130], 00:34:09.312 | 70.00th=[ 3195], 80.00th=[ 3294], 90.00th=[ 3458], 95.00th=[ 3752], 00:34:09.312 | 99.00th=[ 4424], 99.50th=[ 4752], 99.90th=[ 5342], 99.95th=[ 5473], 00:34:09.312 | 99.99th=[ 5669] 00:34:09.312 bw ( KiB/s): min=19776, max=20720, per=24.86%, avg=20282.67, stdev=303.05, samples=9 00:34:09.312 iops : min= 2472, max= 2590, avg=2535.33, stdev=37.88, samples=9 00:34:09.312 lat (usec) : 750=0.02%, 1000=0.02% 00:34:09.312 lat (msec) : 2=0.70%, 4=97.14%, 10=2.13% 00:34:09.312 cpu : usr=97.00%, sys=2.68%, ctx=8, majf=0, minf=9 00:34:09.312 IO depths : 1=0.3%, 2=5.9%, 4=66.6%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:09.312 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.312 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:09.312 issued rwts: total=12640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:09.312 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:09.312 00:34:09.312 Run status group 0 (all jobs): 00:34:09.312 READ: bw=79.7MiB/s (83.5MB/s), 19.6MiB/s-20.5MiB/s (20.5MB/s-21.5MB/s), io=399MiB (418MB), run=5001-5003msec 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.312 00:34:09.312 real 0m24.644s 00:34:09.312 user 4m53.023s 00:34:09.312 sys 0m4.469s 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:09.312 15:15:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:09.312 ************************************ 00:34:09.312 END TEST fio_dif_rand_params 00:34:09.312 ************************************ 00:34:09.312 15:15:01 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:09.312 15:15:01 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:09.312 15:15:01 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:09.312 15:15:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:09.312 ************************************ 00:34:09.312 START TEST fio_dif_digest 00:34:09.312 ************************************ 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:09.312 bdev_null0 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.312 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:09.312 [2024-12-11 15:15:01.528803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:09.313 { 00:34:09.313 "params": { 00:34:09.313 "name": 
"Nvme$subsystem", 00:34:09.313 "trtype": "$TEST_TRANSPORT", 00:34:09.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:09.313 "adrfam": "ipv4", 00:34:09.313 "trsvcid": "$NVMF_PORT", 00:34:09.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:09.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:09.313 "hdgst": ${hdgst:-false}, 00:34:09.313 "ddgst": ${ddgst:-false} 00:34:09.313 }, 00:34:09.313 "method": "bdev_nvme_attach_controller" 00:34:09.313 } 00:34:09.313 EOF 00:34:09.313 )") 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:09.313 "params": { 00:34:09.313 "name": "Nvme0", 00:34:09.313 "trtype": "tcp", 00:34:09.313 "traddr": "10.0.0.2", 00:34:09.313 "adrfam": "ipv4", 00:34:09.313 "trsvcid": "4420", 00:34:09.313 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:09.313 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:09.313 "hdgst": true, 00:34:09.313 "ddgst": true 00:34:09.313 }, 00:34:09.313 "method": "bdev_nvme_attach_controller" 00:34:09.313 }' 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:34:09.313 15:15:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:09.313 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:09.313 ... 
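The fio job file that dif.sh feeds to the plugin over /dev/fd/61 is not echoed in the trace above; based on the parameters that are visible in this run (128k blocks, iodepth 3, 3 jobs, 10 s runtime, randread, spdk_bdev ioengine), a roughly equivalent standalone job might look like the sketch below. The filename Nvme0n1 assumes SPDK's default namespace-bdev naming for the Nvme0 controller attached in the JSON config printed above, and /path/to/spdk and bdev.json are placeholders; the exact file gen_fio_conf produces may differ.

cat > digest.fio <<'EOF'
; sketch of the digest job, not the verbatim file gen_fio_conf generated
[global]
thread=1                ; SPDK fio plugins require thread mode
ioengine=spdk_bdev
time_based=1
runtime=10
rw=randread
bs=128k
iodepth=3
numjobs=3

[filename0]
filename=Nvme0n1        ; namespace bdev of the Nvme0 controller from the JSON config above
EOF

LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev fio --ioengine=spdk_bdev \
    --spdk_json_conf=bdev.json digest.fio    # bdev.json holds the attach-controller JSON shown above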
00:34:09.313 fio-3.35 00:34:09.313 Starting 3 threads 00:34:21.519 00:34:21.519 filename0: (groupid=0, jobs=1): err= 0: pid=3374694: Wed Dec 11 15:15:12 2024 00:34:21.519 read: IOPS=295, BW=37.0MiB/s (38.8MB/s)(371MiB/10048msec) 00:34:21.519 slat (nsec): min=6448, max=45240, avg=16824.51, stdev=7121.81 00:34:21.519 clat (usec): min=6453, max=52991, avg=10111.46, stdev=1300.54 00:34:21.519 lat (usec): min=6464, max=53014, avg=10128.28, stdev=1300.74 00:34:21.519 clat percentiles (usec): 00:34:21.519 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:34:21.519 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:34:21.519 | 70.00th=[10552], 80.00th=[10683], 90.00th=[11076], 95.00th=[11207], 00:34:21.519 | 99.00th=[11863], 99.50th=[12256], 99.90th=[13173], 99.95th=[48497], 00:34:21.519 | 99.99th=[53216] 00:34:21.519 bw ( KiB/s): min=36608, max=39680, per=36.16%, avg=38003.20, stdev=745.09, samples=20 00:34:21.519 iops : min= 286, max= 310, avg=296.90, stdev= 5.82, samples=20 00:34:21.519 lat (msec) : 10=45.04%, 20=54.90%, 50=0.03%, 100=0.03% 00:34:21.519 cpu : usr=96.26%, sys=3.39%, ctx=19, majf=0, minf=98 00:34:21.519 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.519 issued rwts: total=2971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.519 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:21.519 filename0: (groupid=0, jobs=1): err= 0: pid=3374695: Wed Dec 11 15:15:12 2024 00:34:21.519 read: IOPS=270, BW=33.8MiB/s (35.5MB/s)(338MiB/10004msec) 00:34:21.519 slat (nsec): min=6655, max=57457, avg=17783.52, stdev=7062.85 00:34:21.519 clat (usec): min=5283, max=13902, avg=11066.70, stdev=847.07 00:34:21.519 lat (usec): min=5305, max=13930, avg=11084.48, stdev=847.48 00:34:21.519 clat percentiles (usec): 00:34:21.519 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:34:21.519 | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:34:21.519 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:34:21.519 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13698], 99.95th=[13829], 00:34:21.519 | 99.99th=[13960] 00:34:21.519 bw ( KiB/s): min=32768, max=36864, per=32.97%, avg=34654.32, stdev=823.86, samples=19 00:34:21.519 iops : min= 256, max= 288, avg=270.74, stdev= 6.44, samples=19 00:34:21.519 lat (msec) : 10=8.90%, 20=91.10% 00:34:21.519 cpu : usr=96.36%, sys=3.29%, ctx=34, majf=0, minf=104 00:34:21.519 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.519 issued rwts: total=2707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.519 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:21.519 filename0: (groupid=0, jobs=1): err= 0: pid=3374696: Wed Dec 11 15:15:12 2024 00:34:21.519 read: IOPS=256, BW=32.0MiB/s (33.6MB/s)(322MiB/10045msec) 00:34:21.519 slat (nsec): min=6624, max=53463, avg=20255.93, stdev=7236.09 00:34:21.519 clat (usec): min=8240, max=54725, avg=11678.14, stdev=1943.75 00:34:21.519 lat (usec): min=8268, max=54762, avg=11698.40, stdev=1943.94 00:34:21.519 clat percentiles (usec): 00:34:21.519 | 1.00th=[ 9896], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 
00:34:21.519 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[11731], 00:34:21.519 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12780], 95.00th=[13173], 00:34:21.519 | 99.00th=[13829], 99.50th=[14091], 99.90th=[53216], 99.95th=[53216], 00:34:21.519 | 99.99th=[54789] 00:34:21.519 bw ( KiB/s): min=31232, max=33792, per=31.30%, avg=32896.00, stdev=651.35, samples=20 00:34:21.519 iops : min= 244, max= 264, avg=257.00, stdev= 5.09, samples=20 00:34:21.519 lat (msec) : 10=1.91%, 20=97.90%, 50=0.08%, 100=0.12% 00:34:21.519 cpu : usr=96.71%, sys=2.95%, ctx=21, majf=0, minf=111 00:34:21.519 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.519 issued rwts: total=2572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.519 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:21.519 00:34:21.519 Run status group 0 (all jobs): 00:34:21.519 READ: bw=103MiB/s (108MB/s), 32.0MiB/s-37.0MiB/s (33.6MB/s-38.8MB/s), io=1031MiB (1081MB), run=10004-10048msec 00:34:21.519 15:15:12 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:21.520 15:15:12 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:21.520 15:15:12 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:21.520 15:15:12 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:21.520 15:15:12 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:21.520 15:15:12 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:21.520 15:15:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.520 15:15:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:21.520 15:15:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.520 15:15:12 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:21.520 15:15:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.520 15:15:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:21.520 15:15:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.520 00:34:21.520 real 0m11.291s 00:34:21.520 user 0m35.726s 00:34:21.520 sys 0m1.336s 00:34:21.520 15:15:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:21.520 15:15:12 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:21.520 ************************************ 00:34:21.520 END TEST fio_dif_digest 00:34:21.520 ************************************ 00:34:21.520 15:15:12 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:21.520 15:15:12 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:21.520 15:15:12 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:21.520 15:15:12 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:34:21.520 15:15:12 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:21.520 15:15:12 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:34:21.520 15:15:12 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:21.520 15:15:12 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:21.520 rmmod nvme_tcp 00:34:21.520 rmmod nvme_fabrics 00:34:21.520 rmmod nvme_keyring 00:34:21.520 15:15:12 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:21.520 15:15:12 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:34:21.520 15:15:12 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:34:21.520 15:15:12 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3365966 ']' 00:34:21.520 15:15:12 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3365966 00:34:21.520 15:15:12 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3365966 ']' 00:34:21.520 15:15:12 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3365966 00:34:21.520 15:15:12 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:21.520 15:15:12 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:21.520 15:15:12 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3365966 00:34:21.520 15:15:12 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:21.520 15:15:12 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:21.520 15:15:12 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3365966' 00:34:21.520 killing process with pid 3365966 00:34:21.520 15:15:12 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3365966 00:34:21.520 15:15:12 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3365966 00:34:21.520 15:15:13 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:34:21.520 15:15:13 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:34:22.897 Waiting for block devices as requested 00:34:22.897 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:22.897 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:23.156 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:23.156 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:23.156 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:23.414 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:23.414 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:23.414 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:23.673 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:23.673 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:23.673 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:23.673 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:23.931 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:23.931 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:23.931 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:24.190 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:24.190 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:24.190 15:15:17 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:24.190 15:15:17 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:24.190 15:15:17 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:34:24.190 15:15:17 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:34:24.190 15:15:17 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:24.190 15:15:17 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:34:24.190 15:15:17 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:24.190 15:15:17 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:24.190 15:15:17 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.190 15:15:17 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:24.190 15:15:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.726 15:15:19 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:26.726 
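For reference, the target-side objects that the fio_dif_digest case exercised can be recreated against a running nvmf_tgt with scripts/rpc.py, using the same arguments the rpc_cmd wrapper issued above. This is a sketch only, and it assumes the TCP transport was already created earlier in the suite (nvmf_create_transport -t tcp):

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3    # 64 MB null bdev, 512 B blocks, 16 B metadata, DIF type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# The initiator side is supplied to fio as the bdev_nvme_attach_controller JSON printed
# earlier, with "hdgst": true and "ddgst": true enabling NVMe/TCP header and data digests.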
00:34:26.726 real 1m14.573s 00:34:26.726 user 7m11.858s 00:34:26.726 sys 0m19.184s 00:34:26.726 15:15:19 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:26.726 15:15:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:26.726 ************************************ 00:34:26.726 END TEST nvmf_dif 00:34:26.726 ************************************ 00:34:26.726 15:15:19 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:26.726 15:15:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:26.726 15:15:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:26.726 15:15:19 -- common/autotest_common.sh@10 -- # set +x 00:34:26.726 ************************************ 00:34:26.726 START TEST nvmf_abort_qd_sizes 00:34:26.726 ************************************ 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:26.726 * Looking for test storage... 00:34:26.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:26.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.726 --rc genhtml_branch_coverage=1 00:34:26.726 --rc genhtml_function_coverage=1 00:34:26.726 --rc genhtml_legend=1 00:34:26.726 --rc geninfo_all_blocks=1 00:34:26.726 --rc geninfo_unexecuted_blocks=1 00:34:26.726 00:34:26.726 ' 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:26.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.726 --rc genhtml_branch_coverage=1 00:34:26.726 --rc genhtml_function_coverage=1 00:34:26.726 --rc genhtml_legend=1 00:34:26.726 --rc geninfo_all_blocks=1 00:34:26.726 --rc geninfo_unexecuted_blocks=1 00:34:26.726 00:34:26.726 ' 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:26.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.726 --rc genhtml_branch_coverage=1 00:34:26.726 --rc genhtml_function_coverage=1 00:34:26.726 --rc genhtml_legend=1 00:34:26.726 --rc geninfo_all_blocks=1 00:34:26.726 --rc geninfo_unexecuted_blocks=1 00:34:26.726 00:34:26.726 ' 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:26.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:26.726 --rc genhtml_branch_coverage=1 00:34:26.726 --rc genhtml_function_coverage=1 00:34:26.726 --rc genhtml_legend=1 00:34:26.726 --rc geninfo_all_blocks=1 00:34:26.726 --rc geninfo_unexecuted_blocks=1 00:34:26.726 00:34:26.726 ' 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:26.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:26.726 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:26.727 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:26.727 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:26.727 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:26.727 15:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:26.727 15:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:26.727 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:26.727 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:26.727 15:15:19 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:34:26.727 15:15:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:31.998 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:31.998 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:34:31.998 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:31.998 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:31.998 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:31.998 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:31.998 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:31.998 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:34:31.998 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:31.998 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:34:31.998 15:15:25 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:34:31.998 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:31.999 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:31.999 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:31.999 Found net devices under 0000:86:00.0: cvl_0_0 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:31.999 Found net devices under 0000:86:00.1: cvl_0_1 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:31.999 15:15:25 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:31.999 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:32.258 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:32.258 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:32.258 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:32.258 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:32.258 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:32.258 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:32.258 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:32.516 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:32.516 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:32.516 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:32.516 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:32.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:32.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:34:32.516 00:34:32.516 --- 10.0.0.2 ping statistics --- 00:34:32.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.516 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:34:32.516 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:32.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:32.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:34:32.516 00:34:32.516 --- 10.0.0.1 ping statistics --- 00:34:32.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.516 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:34:32.516 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:32.516 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:34:32.516 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:32.516 15:15:25 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:34:35.050 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:35.050 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:35.309 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:35.309 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:35.309 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:35.309 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:35.309 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:35.309 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:35.309 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:35.309 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:35.309 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:35.309 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:35.309 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:35.309 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:35.309 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:35.309 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:36.245 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:36.245 15:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:36.245 15:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:36.245 15:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:36.245 15:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:36.245 15:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:36.245 15:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:36.245 15:15:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:36.246 15:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:36.246 15:15:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:36.246 15:15:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:36.246 15:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3383008 00:34:36.246 15:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3383008 00:34:36.246 15:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:36.246 15:15:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3383008 ']' 00:34:36.246 15:15:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.246 15:15:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:36.246 15:15:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:36.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:36.246 15:15:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:36.246 15:15:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:36.504 [2024-12-11 15:15:29.301773] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:34:36.505 [2024-12-11 15:15:29.301814] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:36.505 [2024-12-11 15:15:29.381702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:36.505 [2024-12-11 15:15:29.424021] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:36.505 [2024-12-11 15:15:29.424057] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:36.505 [2024-12-11 15:15:29.424064] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:36.505 [2024-12-11 15:15:29.424069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:36.505 [2024-12-11 15:15:29.424074] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:36.505 [2024-12-11 15:15:29.425665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:36.505 [2024-12-11 15:15:29.425778] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:34:36.505 [2024-12-11 15:15:29.425902] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:36.505 [2024-12-11 15:15:29.425903] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:34:36.505 15:15:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:36.505 15:15:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:36.505 15:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:36.505 15:15:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:36.505 15:15:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:36.763 15:15:29 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:36.763 15:15:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:36.763 15:15:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:36.763 15:15:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:36.763 15:15:29 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:36.763 15:15:29 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:36.763 15:15:29 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:36.763 15:15:29 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:36.763 15:15:29 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:36.763 15:15:29 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:34:36.764 15:15:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:36.764 
15:15:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:36.764 15:15:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:36.764 15:15:29 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:36.764 15:15:29 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:36.764 15:15:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:36.764 15:15:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:36.764 15:15:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:36.764 15:15:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:36.764 15:15:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:36.764 15:15:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:36.764 ************************************ 00:34:36.764 START TEST spdk_target_abort 00:34:36.764 ************************************ 00:34:36.764 15:15:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:36.764 15:15:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:36.764 15:15:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:36.764 15:15:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.764 15:15:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:40.046 spdk_targetn1 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:40.046 [2024-12-11 15:15:32.444288] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:40.046 [2024-12-11 15:15:32.488676] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:40.046 15:15:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:43.330 Initializing NVMe Controllers 00:34:43.330 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:43.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:43.330 Initialization complete. Launching workers. 00:34:43.330 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14617, failed: 0 00:34:43.330 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1387, failed to submit 13230 00:34:43.330 success 707, unsuccessful 680, failed 0 00:34:43.330 15:15:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:43.330 15:15:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:46.615 Initializing NVMe Controllers 00:34:46.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:46.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:46.615 Initialization complete. Launching workers. 00:34:46.615 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8576, failed: 0 00:34:46.615 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1261, failed to submit 7315 00:34:46.616 success 339, unsuccessful 922, failed 0 00:34:46.616 15:15:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:46.616 15:15:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:49.901 Initializing NVMe Controllers 00:34:49.901 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:49.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:49.902 Initialization complete. Launching workers. 
00:34:49.902 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37832, failed: 0 00:34:49.902 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2719, failed to submit 35113 00:34:49.902 success 578, unsuccessful 2141, failed 0 00:34:49.902 15:15:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:49.902 15:15:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.902 15:15:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:49.902 15:15:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.902 15:15:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:49.902 15:15:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.902 15:15:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:50.838 15:15:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.838 15:15:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3383008 00:34:50.838 15:15:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3383008 ']' 00:34:50.838 15:15:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3383008 00:34:50.838 15:15:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:50.838 15:15:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:50.838 15:15:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3383008 00:34:50.838 15:15:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:50.838 15:15:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:50.838 15:15:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3383008' 00:34:50.838 killing process with pid 3383008 00:34:50.838 15:15:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3383008 00:34:50.838 15:15:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3383008 00:34:50.838 00:34:50.838 real 0m14.223s 00:34:50.838 user 0m54.240s 00:34:50.838 sys 0m2.588s 00:34:50.838 15:15:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:50.838 15:15:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:50.838 ************************************ 00:34:50.838 END TEST spdk_target_abort 00:34:50.838 ************************************ 00:34:50.838 15:15:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:50.838 15:15:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:50.838 15:15:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:50.838 15:15:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:51.097 ************************************ 00:34:51.097 START TEST kernel_target_abort 00:34:51.097 
************************************ 00:34:51.097 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:51.097 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:34:51.097 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:34:51.097 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.097 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.097 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.097 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.097 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.097 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.097 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.097 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.097 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.098 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:51.098 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:51.098 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:51.098 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:51.098 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:51.098 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:51.098 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:34:51.098 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:51.098 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:51.098 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:51.098 15:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:34:53.633 Waiting for block devices as requested 00:34:53.633 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:53.897 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:53.897 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:53.897 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:54.188 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:54.188 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:54.188 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:54.508 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:54.508 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:54.508 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:54.508 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:54.773 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:54.773 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:54.773 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:54.773 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:55.033 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:55.033 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:55.033 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:55.033 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:55.033 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:55.033 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:55.033 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:55.033 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:55.033 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:55.033 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:55.033 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdk-gpt.py nvme0n1 00:34:55.033 No valid GPT data, bailing 00:34:55.033 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:55.292 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:55.292 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:55.292 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:55.292 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:55.292 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:55.292 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:55.292 15:15:48 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:55.292 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:55.292 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:34:55.292 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:55.292 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:34:55.292 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:55.292 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:34:55.292 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:34:55.292 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:34:55.292 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:55.292 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:55.292 00:34:55.292 Discovery Log Number of Records 2, Generation counter 2 00:34:55.292 =====Discovery Log Entry 0====== 00:34:55.292 trtype: tcp 00:34:55.292 adrfam: ipv4 00:34:55.292 subtype: current discovery subsystem 00:34:55.292 treq: not specified, sq flow control disable supported 00:34:55.292 portid: 1 00:34:55.292 trsvcid: 4420 00:34:55.292 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:55.292 traddr: 10.0.0.1 00:34:55.292 eflags: none 00:34:55.292 sectype: none 00:34:55.292 =====Discovery Log Entry 1====== 00:34:55.292 trtype: tcp 00:34:55.292 adrfam: ipv4 00:34:55.292 subtype: nvme subsystem 00:34:55.292 treq: not specified, sq flow control disable supported 00:34:55.292 portid: 1 00:34:55.292 trsvcid: 4420 00:34:55.292 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:55.292 traddr: 10.0.0.1 00:34:55.292 eflags: none 00:34:55.292 sectype: none 00:34:55.292 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:55.293 15:15:48 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:55.293 15:15:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:58.581 Initializing NVMe Controllers 00:34:58.581 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:58.581 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:58.581 Initialization complete. Launching workers. 00:34:58.581 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 91580, failed: 0 00:34:58.581 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 91580, failed to submit 0 00:34:58.581 success 0, unsuccessful 91580, failed 0 00:34:58.581 15:15:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:58.581 15:15:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:01.870 Initializing NVMe Controllers 00:35:01.870 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:01.870 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:01.870 Initialization complete. Launching workers. 
00:35:01.870 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145467, failed: 0 00:35:01.870 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36442, failed to submit 109025 00:35:01.870 success 0, unsuccessful 36442, failed 0 00:35:01.870 15:15:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:01.870 15:15:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:05.171 Initializing NVMe Controllers 00:35:05.171 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:05.171 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:05.171 Initialization complete. Launching workers. 00:35:05.171 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 136832, failed: 0 00:35:05.171 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34278, failed to submit 102554 00:35:05.171 success 0, unsuccessful 34278, failed 0 00:35:05.172 15:15:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:05.172 15:15:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:05.172 15:15:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:05.172 15:15:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:05.172 15:15:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:05.172 15:15:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:05.172 15:15:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:05.172 15:15:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:05.172 15:15:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:05.172 15:15:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:35:07.709 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:07.709 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:07.709 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:07.709 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:07.709 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:07.709 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:07.709 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:07.709 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:07.709 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:07.709 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:07.709 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:07.709 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:07.709 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:07.709 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:07.709 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:35:07.709 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:08.646 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:08.646 00:35:08.646 real 0m17.561s 00:35:08.646 user 0m9.157s 00:35:08.646 sys 0m5.037s 00:35:08.646 15:16:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:08.646 15:16:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:08.646 ************************************ 00:35:08.646 END TEST kernel_target_abort 00:35:08.646 ************************************ 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:08.646 rmmod nvme_tcp 00:35:08.646 rmmod nvme_fabrics 00:35:08.646 rmmod nvme_keyring 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3383008 ']' 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3383008 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3383008 ']' 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3383008 00:35:08.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (3383008) - No such process 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3383008 is not found' 00:35:08.646 Process with pid 3383008 is not found 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:08.646 15:16:01 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:35:11.182 Waiting for block devices as requested 00:35:11.441 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:11.441 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:11.700 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:11.700 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:11.700 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:11.700 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:11.960 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:11.960 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:11.960 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:12.232 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:12.232 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:12.232 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:12.491 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:12.491 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:12.491 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:12.491 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 
00:35:12.750 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:12.750 15:16:05 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:12.750 15:16:05 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:12.750 15:16:05 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:12.750 15:16:05 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:12.750 15:16:05 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:12.750 15:16:05 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:12.750 15:16:05 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:12.750 15:16:05 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:12.750 15:16:05 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:12.750 15:16:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:12.750 15:16:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:15.287 15:16:07 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:15.287 00:35:15.287 real 0m48.438s 00:35:15.287 user 1m7.713s 00:35:15.287 sys 0m16.316s 00:35:15.287 15:16:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:15.287 15:16:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:15.287 ************************************ 00:35:15.287 END TEST nvmf_abort_qd_sizes 00:35:15.287 ************************************ 00:35:15.287 15:16:07 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/file.sh 00:35:15.287 15:16:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:15.287 15:16:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:15.287 15:16:07 -- common/autotest_common.sh@10 -- # set +x 00:35:15.287 ************************************ 00:35:15.287 START TEST keyring_file 00:35:15.287 ************************************ 00:35:15.287 15:16:07 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/file.sh 00:35:15.287 * Looking for test storage... 
00:35:15.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring 00:35:15.287 15:16:07 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:15.287 15:16:07 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:35:15.287 15:16:07 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:15.287 15:16:07 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:15.287 15:16:07 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:15.287 15:16:07 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:15.287 15:16:07 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:15.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.287 --rc genhtml_branch_coverage=1 00:35:15.287 --rc genhtml_function_coverage=1 00:35:15.287 --rc genhtml_legend=1 00:35:15.287 --rc geninfo_all_blocks=1 00:35:15.287 --rc geninfo_unexecuted_blocks=1 00:35:15.287 00:35:15.287 ' 00:35:15.287 15:16:07 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:15.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.287 --rc genhtml_branch_coverage=1 00:35:15.287 --rc genhtml_function_coverage=1 00:35:15.287 --rc genhtml_legend=1 00:35:15.287 --rc geninfo_all_blocks=1 
00:35:15.287 --rc geninfo_unexecuted_blocks=1 00:35:15.287 00:35:15.287 ' 00:35:15.287 15:16:07 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:15.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.287 --rc genhtml_branch_coverage=1 00:35:15.287 --rc genhtml_function_coverage=1 00:35:15.287 --rc genhtml_legend=1 00:35:15.287 --rc geninfo_all_blocks=1 00:35:15.287 --rc geninfo_unexecuted_blocks=1 00:35:15.287 00:35:15.287 ' 00:35:15.287 15:16:07 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:15.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:15.287 --rc genhtml_branch_coverage=1 00:35:15.287 --rc genhtml_function_coverage=1 00:35:15.287 --rc genhtml_legend=1 00:35:15.287 --rc geninfo_all_blocks=1 00:35:15.287 --rc geninfo_unexecuted_blocks=1 00:35:15.287 00:35:15.287 ' 00:35:15.287 15:16:07 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/common.sh 00:35:15.287 15:16:07 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:35:15.287 15:16:07 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:15.287 15:16:07 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:15.287 15:16:07 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:15.287 15:16:07 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:15.287 15:16:07 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:15.287 15:16:07 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:15.287 15:16:07 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:15.287 15:16:07 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:15.287 15:16:07 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:15.287 15:16:07 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:15.287 15:16:07 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:15.287 15:16:08 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:15.287 15:16:08 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:15.287 15:16:08 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:15.287 15:16:08 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:15.287 15:16:08 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:15.287 15:16:08 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:15.287 15:16:08 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:35:15.287 15:16:08 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:15.287 15:16:08 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:15.287 15:16:08 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:15.287 15:16:08 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:15.287 15:16:08 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.287 15:16:08 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.287 15:16:08 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.287 15:16:08 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:15.287 15:16:08 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:15.287 15:16:08 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:15.287 15:16:08 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:15.287 15:16:08 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:15.287 15:16:08 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:15.287 15:16:08 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:15.288 15:16:08 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:15.288 15:16:08 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:15.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:15.288 15:16:08 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:15.288 15:16:08 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:15.288 15:16:08 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:15.288 15:16:08 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:15.288 15:16:08 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:15.288 15:16:08 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:15.288 15:16:08 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:15.288 15:16:08 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:15.288 15:16:08 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
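The prep_key helper entered here builds an on-disk TLS PSK file and later registers it with the bdevperf RPC server. A minimal sketch of that flow, using only commands visible in this trace; the file name and the interchange-encoded payload are placeholders, the real encoding is produced by the format_interchange_psk python helper seen below, and the rpc.py path is shortened from the workspace path used in the log:

  key_path=$(mktemp)                    # e.g. /tmp/tmp.XXXXXXXXXX (placeholder name)
  echo "NVMeTLSkey-1:<encoded 00112233445566778899aabbccddeeff, digest 0>" > "$key_path"   # placeholder payload
  chmod 0600 "$key_path"                # owner-only permissions; a 0660 key is rejected later in this log
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key_path"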
00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5p8BZiIm6P 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:15.288 15:16:08 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:15.288 15:16:08 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:15.288 15:16:08 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:15.288 15:16:08 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:15.288 15:16:08 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:15.288 15:16:08 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5p8BZiIm6P 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5p8BZiIm6P 00:35:15.288 15:16:08 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.5p8BZiIm6P 00:35:15.288 15:16:08 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.P21bC5VhWp 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:15.288 15:16:08 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:15.288 15:16:08 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:15.288 15:16:08 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:15.288 15:16:08 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:15.288 15:16:08 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:15.288 15:16:08 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.P21bC5VhWp 00:35:15.288 15:16:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.P21bC5VhWp 00:35:15.288 15:16:08 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.P21bC5VhWp 00:35:15.288 15:16:08 keyring_file -- keyring/file.sh@30 -- # tgtpid=3391867 00:35:15.288 15:16:08 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:35:15.288 15:16:08 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3391867 00:35:15.288 15:16:08 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3391867 ']' 00:35:15.288 15:16:08 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:15.288 15:16:08 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:15.288 15:16:08 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:15.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:15.288 15:16:08 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:15.288 15:16:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:15.288 [2024-12-11 15:16:08.176796] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:35:15.288 [2024-12-11 15:16:08.176847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3391867 ] 00:35:15.288 [2024-12-11 15:16:08.253116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.288 [2024-12-11 15:16:08.293828] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:15.547 15:16:08 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:15.547 [2024-12-11 15:16:08.504116] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:15.547 null0 00:35:15.547 [2024-12-11 15:16:08.536165] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:15.547 [2024-12-11 15:16:08.536519] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.547 15:16:08 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:15.547 [2024-12-11 15:16:08.564229] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:15.547 request: 00:35:15.547 { 00:35:15.547 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:15.547 "secure_channel": false, 00:35:15.547 "listen_address": { 00:35:15.547 "trtype": "tcp", 00:35:15.547 "traddr": "127.0.0.1", 00:35:15.547 "trsvcid": "4420" 00:35:15.547 }, 00:35:15.547 "method": "nvmf_subsystem_add_listener", 00:35:15.547 "req_id": 1 00:35:15.547 } 00:35:15.547 Got JSON-RPC error response 00:35:15.547 response: 00:35:15.547 { 00:35:15.547 
"code": -32602, 00:35:15.547 "message": "Invalid parameters" 00:35:15.547 } 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:15.547 15:16:08 keyring_file -- keyring/file.sh@47 -- # bperfpid=3391877 00:35:15.547 15:16:08 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3391877 /var/tmp/bperf.sock 00:35:15.547 15:16:08 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3391877 ']' 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:15.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:15.547 15:16:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:15.807 [2024-12-11 15:16:08.618235] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:35:15.807 [2024-12-11 15:16:08.618278] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3391877 ] 00:35:15.807 [2024-12-11 15:16:08.693475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:15.807 [2024-12-11 15:16:08.734880] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.807 15:16:08 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.807 15:16:08 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:15.807 15:16:08 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5p8BZiIm6P 00:35:15.807 15:16:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5p8BZiIm6P 00:35:16.065 15:16:09 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.P21bC5VhWp 00:35:16.065 15:16:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.P21bC5VhWp 00:35:16.324 15:16:09 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:16.324 15:16:09 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:16.324 15:16:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.324 15:16:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:16.324 15:16:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:16.583 15:16:09 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.5p8BZiIm6P == \/\t\m\p\/\t\m\p\.\5\p\8\B\Z\i\I\m\6\P ]] 00:35:16.583 15:16:09 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:16.583 15:16:09 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:16.583 15:16:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:16.583 15:16:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.583 15:16:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.583 15:16:09 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.P21bC5VhWp == \/\t\m\p\/\t\m\p\.\P\2\1\b\C\5\V\h\W\p ]] 00:35:16.583 15:16:09 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:16.583 15:16:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:16.583 15:16:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:16.583 15:16:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.583 15:16:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:16.583 15:16:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.842 15:16:09 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:16.842 15:16:09 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:16.842 15:16:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:16.842 15:16:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:16.842 15:16:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:16.842 15:16:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.842 15:16:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:17.101 15:16:10 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:17.101 15:16:10 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:17.101 15:16:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:17.360 [2024-12-11 15:16:10.190188] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:17.360 nvme0n1 00:35:17.360 15:16:10 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:17.360 15:16:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:17.360 15:16:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:17.360 15:16:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:17.360 15:16:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:17.360 15:16:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:17.619 15:16:10 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:17.619 15:16:10 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:17.619 15:16:10 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:17.619 15:16:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:17.619 15:16:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:17.619 15:16:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:17.619 15:16:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:17.878 15:16:10 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:17.878 15:16:10 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:17.878 Running I/O for 1 seconds... 00:35:18.813 18883.00 IOPS, 73.76 MiB/s 00:35:18.813 Latency(us) 00:35:18.813 [2024-12-11T14:16:11.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:18.813 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:18.813 nvme0n1 : 1.00 18931.92 73.95 0.00 0.00 6748.76 2877.89 10086.85 00:35:18.813 [2024-12-11T14:16:11.861Z] =================================================================================================================== 00:35:18.813 [2024-12-11T14:16:11.861Z] Total : 18931.92 73.95 0.00 0.00 6748.76 2877.89 10086.85 00:35:18.813 { 00:35:18.813 "results": [ 00:35:18.813 { 00:35:18.813 "job": "nvme0n1", 00:35:18.813 "core_mask": "0x2", 00:35:18.813 "workload": "randrw", 00:35:18.813 "percentage": 50, 00:35:18.813 "status": "finished", 00:35:18.813 "queue_depth": 128, 00:35:18.813 "io_size": 4096, 00:35:18.813 "runtime": 1.004177, 00:35:18.813 "iops": 18931.921364460646, 00:35:18.813 "mibps": 73.9528178299244, 00:35:18.813 "io_failed": 0, 00:35:18.813 "io_timeout": 0, 00:35:18.813 "avg_latency_us": 6748.756175852424, 00:35:18.813 "min_latency_us": 2877.885217391304, 00:35:18.813 "max_latency_us": 10086.845217391305 00:35:18.813 } 00:35:18.813 ], 00:35:18.813 "core_count": 1 00:35:18.813 } 00:35:18.813 15:16:11 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:18.813 15:16:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:19.072 15:16:12 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:19.072 15:16:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:19.072 15:16:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:19.072 15:16:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:19.072 15:16:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:19.072 15:16:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.330 15:16:12 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:19.330 15:16:12 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:19.330 15:16:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:19.330 15:16:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:19.330 15:16:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:19.330 15:16:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:19.330 15:16:12 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.589 15:16:12 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:19.589 15:16:12 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:19.589 15:16:12 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:19.589 15:16:12 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:19.589 15:16:12 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:19.589 15:16:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.589 15:16:12 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:19.589 15:16:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.589 15:16:12 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:19.589 15:16:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:19.589 [2024-12-11 15:16:12.618336] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:19.589 [2024-12-11 15:16:12.619052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bee90 (107): Transport endpoint is not connected 00:35:19.589 [2024-12-11 15:16:12.620046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22bee90 (9): Bad file descriptor 00:35:19.589 [2024-12-11 15:16:12.621047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:19.589 [2024-12-11 15:16:12.621056] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:19.589 [2024-12-11 15:16:12.621068] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:19.589 [2024-12-11 15:16:12.621076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
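The failed attach logged just above used --psk key1 and is wrapped in NOT, so the failure is the expected outcome; the same call with --psk key0 succeeded earlier and backed the one-second bdevperf run. The call as printed in the trace, condensed (rpc.py path shortened):

  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0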
00:35:19.589 request: 00:35:19.589 { 00:35:19.589 "name": "nvme0", 00:35:19.589 "trtype": "tcp", 00:35:19.589 "traddr": "127.0.0.1", 00:35:19.589 "adrfam": "ipv4", 00:35:19.589 "trsvcid": "4420", 00:35:19.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:19.589 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:19.589 "prchk_reftag": false, 00:35:19.589 "prchk_guard": false, 00:35:19.589 "hdgst": false, 00:35:19.589 "ddgst": false, 00:35:19.589 "psk": "key1", 00:35:19.589 "allow_unrecognized_csi": false, 00:35:19.589 "method": "bdev_nvme_attach_controller", 00:35:19.589 "req_id": 1 00:35:19.589 } 00:35:19.589 Got JSON-RPC error response 00:35:19.589 response: 00:35:19.589 { 00:35:19.589 "code": -5, 00:35:19.589 "message": "Input/output error" 00:35:19.589 } 00:35:19.848 15:16:12 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:19.848 15:16:12 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:19.848 15:16:12 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:19.848 15:16:12 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:19.848 15:16:12 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:19.848 15:16:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:19.848 15:16:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:19.848 15:16:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:19.848 15:16:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.848 15:16:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:19.848 15:16:12 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:19.848 15:16:12 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:19.848 15:16:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:19.848 15:16:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:19.848 15:16:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:19.848 15:16:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:19.848 15:16:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:20.107 15:16:13 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:20.107 15:16:13 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:20.107 15:16:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:20.364 15:16:13 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:20.364 15:16:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:20.622 15:16:13 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:20.622 15:16:13 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:20.622 15:16:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:20.622 15:16:13 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:35:20.622 15:16:13 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.5p8BZiIm6P 00:35:20.622 15:16:13 keyring_file -- 
keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.5p8BZiIm6P 00:35:20.622 15:16:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:20.622 15:16:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.5p8BZiIm6P 00:35:20.622 15:16:13 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:20.622 15:16:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:20.622 15:16:13 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:20.622 15:16:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:20.622 15:16:13 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5p8BZiIm6P 00:35:20.622 15:16:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5p8BZiIm6P 00:35:20.881 [2024-12-11 15:16:13.818241] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5p8BZiIm6P': 0100660 00:35:20.881 [2024-12-11 15:16:13.818268] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:20.881 request: 00:35:20.881 { 00:35:20.881 "name": "key0", 00:35:20.881 "path": "/tmp/tmp.5p8BZiIm6P", 00:35:20.881 "method": "keyring_file_add_key", 00:35:20.881 "req_id": 1 00:35:20.881 } 00:35:20.881 Got JSON-RPC error response 00:35:20.881 response: 00:35:20.881 { 00:35:20.881 "code": -1, 00:35:20.881 "message": "Operation not permitted" 00:35:20.881 } 00:35:20.881 15:16:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:20.881 15:16:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:20.881 15:16:13 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:20.881 15:16:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:20.881 15:16:13 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.5p8BZiIm6P 00:35:20.881 15:16:13 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5p8BZiIm6P 00:35:20.881 15:16:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5p8BZiIm6P 00:35:21.140 15:16:14 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.5p8BZiIm6P 00:35:21.140 15:16:14 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:21.140 15:16:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:21.140 15:16:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:21.140 15:16:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:21.140 15:16:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:21.140 15:16:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:21.398 15:16:14 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:21.398 15:16:14 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:21.398 15:16:14 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:21.398 15:16:14 keyring_file -- common/autotest_common.sh@654 -- # 
valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:21.398 15:16:14 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:21.398 15:16:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:21.398 15:16:14 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:21.398 15:16:14 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:21.398 15:16:14 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:21.398 15:16:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:21.398 [2024-12-11 15:16:14.411811] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.5p8BZiIm6P': No such file or directory 00:35:21.398 [2024-12-11 15:16:14.411830] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:21.398 [2024-12-11 15:16:14.411845] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:21.398 [2024-12-11 15:16:14.411852] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:21.398 [2024-12-11 15:16:14.411859] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:21.398 [2024-12-11 15:16:14.411866] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:21.398 request: 00:35:21.398 { 00:35:21.398 "name": "nvme0", 00:35:21.398 "trtype": "tcp", 00:35:21.398 "traddr": "127.0.0.1", 00:35:21.398 "adrfam": "ipv4", 00:35:21.398 "trsvcid": "4420", 00:35:21.398 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:21.398 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:21.398 "prchk_reftag": false, 00:35:21.398 "prchk_guard": false, 00:35:21.398 "hdgst": false, 00:35:21.398 "ddgst": false, 00:35:21.398 "psk": "key0", 00:35:21.398 "allow_unrecognized_csi": false, 00:35:21.398 "method": "bdev_nvme_attach_controller", 00:35:21.398 "req_id": 1 00:35:21.398 } 00:35:21.398 Got JSON-RPC error response 00:35:21.398 response: 00:35:21.398 { 00:35:21.398 "code": -19, 00:35:21.398 "message": "No such device" 00:35:21.398 } 00:35:21.398 15:16:14 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:21.398 15:16:14 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:21.398 15:16:14 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:21.398 15:16:14 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:21.398 15:16:14 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:21.399 15:16:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:21.657 15:16:14 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:21.657 15:16:14 keyring_file -- 
keyring/common.sh@15 -- # local name key digest path 00:35:21.657 15:16:14 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:21.657 15:16:14 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:21.657 15:16:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:21.657 15:16:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:21.657 15:16:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5wtfJNMZjO 00:35:21.657 15:16:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:21.657 15:16:14 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:21.657 15:16:14 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:21.657 15:16:14 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:21.657 15:16:14 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:21.657 15:16:14 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:21.657 15:16:14 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:21.657 15:16:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5wtfJNMZjO 00:35:21.657 15:16:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5wtfJNMZjO 00:35:21.657 15:16:14 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.5wtfJNMZjO 00:35:21.657 15:16:14 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5wtfJNMZjO 00:35:21.657 15:16:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5wtfJNMZjO 00:35:21.915 15:16:14 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:21.915 15:16:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:22.177 nvme0n1 00:35:22.177 15:16:15 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:22.177 15:16:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:22.177 15:16:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:22.177 15:16:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:22.177 15:16:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:22.177 15:16:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:22.438 15:16:15 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:22.438 15:16:15 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:22.438 15:16:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:22.697 15:16:15 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:22.697 15:16:15 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:22.697 15:16:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:22.697 15:16:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:22.697 
15:16:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:22.955 15:16:15 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:22.955 15:16:15 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:22.955 15:16:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:22.955 15:16:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:22.955 15:16:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:22.955 15:16:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:22.955 15:16:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:22.955 15:16:15 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:22.955 15:16:15 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:22.956 15:16:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:23.215 15:16:16 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:23.215 15:16:16 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:23.215 15:16:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:23.474 15:16:16 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:23.474 15:16:16 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5wtfJNMZjO 00:35:23.474 15:16:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5wtfJNMZjO 00:35:23.732 15:16:16 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.P21bC5VhWp 00:35:23.732 15:16:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.P21bC5VhWp 00:35:23.732 15:16:16 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:23.732 15:16:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:23.991 nvme0n1 00:35:23.991 15:16:17 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:23.991 15:16:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:24.251 15:16:17 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:24.251 "subsystems": [ 00:35:24.251 { 00:35:24.251 "subsystem": "keyring", 00:35:24.251 "config": [ 00:35:24.251 { 00:35:24.251 "method": "keyring_file_add_key", 00:35:24.251 "params": { 00:35:24.251 "name": "key0", 00:35:24.251 "path": "/tmp/tmp.5wtfJNMZjO" 00:35:24.251 } 00:35:24.251 }, 00:35:24.251 { 00:35:24.251 "method": "keyring_file_add_key", 00:35:24.251 "params": { 00:35:24.251 "name": "key1", 00:35:24.251 "path": 
"/tmp/tmp.P21bC5VhWp" 00:35:24.251 } 00:35:24.251 } 00:35:24.251 ] 00:35:24.251 }, 00:35:24.251 { 00:35:24.251 "subsystem": "iobuf", 00:35:24.251 "config": [ 00:35:24.251 { 00:35:24.251 "method": "iobuf_set_options", 00:35:24.251 "params": { 00:35:24.251 "small_pool_count": 8192, 00:35:24.251 "large_pool_count": 1024, 00:35:24.251 "small_bufsize": 8192, 00:35:24.251 "large_bufsize": 135168, 00:35:24.251 "enable_numa": false 00:35:24.251 } 00:35:24.251 } 00:35:24.251 ] 00:35:24.251 }, 00:35:24.251 { 00:35:24.251 "subsystem": "sock", 00:35:24.251 "config": [ 00:35:24.251 { 00:35:24.251 "method": "sock_set_default_impl", 00:35:24.251 "params": { 00:35:24.251 "impl_name": "posix" 00:35:24.251 } 00:35:24.251 }, 00:35:24.251 { 00:35:24.251 "method": "sock_impl_set_options", 00:35:24.251 "params": { 00:35:24.251 "impl_name": "ssl", 00:35:24.251 "recv_buf_size": 4096, 00:35:24.251 "send_buf_size": 4096, 00:35:24.251 "enable_recv_pipe": true, 00:35:24.251 "enable_quickack": false, 00:35:24.251 "enable_placement_id": 0, 00:35:24.251 "enable_zerocopy_send_server": true, 00:35:24.251 "enable_zerocopy_send_client": false, 00:35:24.251 "zerocopy_threshold": 0, 00:35:24.251 "tls_version": 0, 00:35:24.251 "enable_ktls": false 00:35:24.251 } 00:35:24.251 }, 00:35:24.251 { 00:35:24.251 "method": "sock_impl_set_options", 00:35:24.251 "params": { 00:35:24.251 "impl_name": "posix", 00:35:24.251 "recv_buf_size": 2097152, 00:35:24.251 "send_buf_size": 2097152, 00:35:24.251 "enable_recv_pipe": true, 00:35:24.251 "enable_quickack": false, 00:35:24.251 "enable_placement_id": 0, 00:35:24.251 "enable_zerocopy_send_server": true, 00:35:24.251 "enable_zerocopy_send_client": false, 00:35:24.251 "zerocopy_threshold": 0, 00:35:24.251 "tls_version": 0, 00:35:24.251 "enable_ktls": false 00:35:24.251 } 00:35:24.251 } 00:35:24.251 ] 00:35:24.251 }, 00:35:24.251 { 00:35:24.251 "subsystem": "vmd", 00:35:24.251 "config": [] 00:35:24.251 }, 00:35:24.251 { 00:35:24.251 "subsystem": "accel", 00:35:24.251 "config": [ 00:35:24.251 { 00:35:24.251 "method": "accel_set_options", 00:35:24.251 "params": { 00:35:24.251 "small_cache_size": 128, 00:35:24.251 "large_cache_size": 16, 00:35:24.251 "task_count": 2048, 00:35:24.251 "sequence_count": 2048, 00:35:24.251 "buf_count": 2048 00:35:24.251 } 00:35:24.251 } 00:35:24.251 ] 00:35:24.251 }, 00:35:24.251 { 00:35:24.251 "subsystem": "bdev", 00:35:24.251 "config": [ 00:35:24.251 { 00:35:24.251 "method": "bdev_set_options", 00:35:24.251 "params": { 00:35:24.251 "bdev_io_pool_size": 65535, 00:35:24.251 "bdev_io_cache_size": 256, 00:35:24.251 "bdev_auto_examine": true, 00:35:24.251 "iobuf_small_cache_size": 128, 00:35:24.251 "iobuf_large_cache_size": 16 00:35:24.251 } 00:35:24.251 }, 00:35:24.251 { 00:35:24.251 "method": "bdev_raid_set_options", 00:35:24.251 "params": { 00:35:24.251 "process_window_size_kb": 1024, 00:35:24.251 "process_max_bandwidth_mb_sec": 0 00:35:24.251 } 00:35:24.251 }, 00:35:24.251 { 00:35:24.251 "method": "bdev_iscsi_set_options", 00:35:24.251 "params": { 00:35:24.251 "timeout_sec": 30 00:35:24.251 } 00:35:24.251 }, 00:35:24.251 { 00:35:24.251 "method": "bdev_nvme_set_options", 00:35:24.251 "params": { 00:35:24.251 "action_on_timeout": "none", 00:35:24.251 "timeout_us": 0, 00:35:24.251 "timeout_admin_us": 0, 00:35:24.251 "keep_alive_timeout_ms": 10000, 00:35:24.251 "arbitration_burst": 0, 00:35:24.251 "low_priority_weight": 0, 00:35:24.251 "medium_priority_weight": 0, 00:35:24.251 "high_priority_weight": 0, 00:35:24.251 "nvme_adminq_poll_period_us": 10000, 00:35:24.251 
"nvme_ioq_poll_period_us": 0, 00:35:24.251 "io_queue_requests": 512, 00:35:24.251 "delay_cmd_submit": true, 00:35:24.251 "transport_retry_count": 4, 00:35:24.251 "bdev_retry_count": 3, 00:35:24.251 "transport_ack_timeout": 0, 00:35:24.251 "ctrlr_loss_timeout_sec": 0, 00:35:24.251 "reconnect_delay_sec": 0, 00:35:24.251 "fast_io_fail_timeout_sec": 0, 00:35:24.251 "disable_auto_failback": false, 00:35:24.251 "generate_uuids": false, 00:35:24.251 "transport_tos": 0, 00:35:24.251 "nvme_error_stat": false, 00:35:24.251 "rdma_srq_size": 0, 00:35:24.251 "io_path_stat": false, 00:35:24.251 "allow_accel_sequence": false, 00:35:24.251 "rdma_max_cq_size": 0, 00:35:24.251 "rdma_cm_event_timeout_ms": 0, 00:35:24.251 "dhchap_digests": [ 00:35:24.251 "sha256", 00:35:24.251 "sha384", 00:35:24.251 "sha512" 00:35:24.251 ], 00:35:24.251 "dhchap_dhgroups": [ 00:35:24.251 "null", 00:35:24.251 "ffdhe2048", 00:35:24.251 "ffdhe3072", 00:35:24.251 "ffdhe4096", 00:35:24.251 "ffdhe6144", 00:35:24.251 "ffdhe8192" 00:35:24.251 ], 00:35:24.251 "rdma_umr_per_io": false 00:35:24.251 } 00:35:24.251 }, 00:35:24.251 { 00:35:24.251 "method": "bdev_nvme_attach_controller", 00:35:24.251 "params": { 00:35:24.251 "name": "nvme0", 00:35:24.251 "trtype": "TCP", 00:35:24.251 "adrfam": "IPv4", 00:35:24.251 "traddr": "127.0.0.1", 00:35:24.251 "trsvcid": "4420", 00:35:24.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:24.251 "prchk_reftag": false, 00:35:24.251 "prchk_guard": false, 00:35:24.251 "ctrlr_loss_timeout_sec": 0, 00:35:24.251 "reconnect_delay_sec": 0, 00:35:24.251 "fast_io_fail_timeout_sec": 0, 00:35:24.251 "psk": "key0", 00:35:24.251 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:24.251 "hdgst": false, 00:35:24.251 "ddgst": false, 00:35:24.251 "multipath": "multipath" 00:35:24.251 } 00:35:24.251 }, 00:35:24.251 { 00:35:24.251 "method": "bdev_nvme_set_hotplug", 00:35:24.251 "params": { 00:35:24.251 "period_us": 100000, 00:35:24.251 "enable": false 00:35:24.251 } 00:35:24.251 }, 00:35:24.251 { 00:35:24.251 "method": "bdev_wait_for_examine" 00:35:24.251 } 00:35:24.251 ] 00:35:24.251 }, 00:35:24.251 { 00:35:24.251 "subsystem": "nbd", 00:35:24.251 "config": [] 00:35:24.251 } 00:35:24.251 ] 00:35:24.251 }' 00:35:24.251 15:16:17 keyring_file -- keyring/file.sh@115 -- # killprocess 3391877 00:35:24.251 15:16:17 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3391877 ']' 00:35:24.251 15:16:17 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3391877 00:35:24.251 15:16:17 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:24.251 15:16:17 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:24.251 15:16:17 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3391877 00:35:24.511 15:16:17 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:24.511 15:16:17 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:24.511 15:16:17 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3391877' 00:35:24.511 killing process with pid 3391877 00:35:24.511 15:16:17 keyring_file -- common/autotest_common.sh@973 -- # kill 3391877 00:35:24.511 Received shutdown signal, test time was about 1.000000 seconds 00:35:24.511 00:35:24.511 Latency(us) 00:35:24.511 [2024-12-11T14:16:17.559Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:24.511 [2024-12-11T14:16:17.559Z] 
=================================================================================================================== 00:35:24.511 [2024-12-11T14:16:17.559Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:24.511 15:16:17 keyring_file -- common/autotest_common.sh@978 -- # wait 3391877 00:35:24.511 15:16:17 keyring_file -- keyring/file.sh@118 -- # bperfpid=3393390 00:35:24.511 15:16:17 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3393390 /var/tmp/bperf.sock 00:35:24.511 15:16:17 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3393390 ']' 00:35:24.511 15:16:17 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:24.511 15:16:17 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:24.511 15:16:17 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:24.511 15:16:17 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:24.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:24.511 15:16:17 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:24.511 "subsystems": [ 00:35:24.511 { 00:35:24.511 "subsystem": "keyring", 00:35:24.511 "config": [ 00:35:24.511 { 00:35:24.511 "method": "keyring_file_add_key", 00:35:24.511 "params": { 00:35:24.511 "name": "key0", 00:35:24.511 "path": "/tmp/tmp.5wtfJNMZjO" 00:35:24.512 } 00:35:24.512 }, 00:35:24.512 { 00:35:24.512 "method": "keyring_file_add_key", 00:35:24.512 "params": { 00:35:24.512 "name": "key1", 00:35:24.512 "path": "/tmp/tmp.P21bC5VhWp" 00:35:24.512 } 00:35:24.512 } 00:35:24.512 ] 00:35:24.512 }, 00:35:24.512 { 00:35:24.512 "subsystem": "iobuf", 00:35:24.512 "config": [ 00:35:24.512 { 00:35:24.512 "method": "iobuf_set_options", 00:35:24.512 "params": { 00:35:24.512 "small_pool_count": 8192, 00:35:24.512 "large_pool_count": 1024, 00:35:24.512 "small_bufsize": 8192, 00:35:24.512 "large_bufsize": 135168, 00:35:24.512 "enable_numa": false 00:35:24.512 } 00:35:24.512 } 00:35:24.512 ] 00:35:24.512 }, 00:35:24.512 { 00:35:24.512 "subsystem": "sock", 00:35:24.512 "config": [ 00:35:24.512 { 00:35:24.512 "method": "sock_set_default_impl", 00:35:24.512 "params": { 00:35:24.512 "impl_name": "posix" 00:35:24.512 } 00:35:24.512 }, 00:35:24.512 { 00:35:24.512 "method": "sock_impl_set_options", 00:35:24.512 "params": { 00:35:24.512 "impl_name": "ssl", 00:35:24.512 "recv_buf_size": 4096, 00:35:24.512 "send_buf_size": 4096, 00:35:24.512 "enable_recv_pipe": true, 00:35:24.512 "enable_quickack": false, 00:35:24.512 "enable_placement_id": 0, 00:35:24.512 "enable_zerocopy_send_server": true, 00:35:24.512 "enable_zerocopy_send_client": false, 00:35:24.512 "zerocopy_threshold": 0, 00:35:24.512 "tls_version": 0, 00:35:24.512 "enable_ktls": false 00:35:24.512 } 00:35:24.512 }, 00:35:24.512 { 00:35:24.512 "method": "sock_impl_set_options", 00:35:24.512 "params": { 00:35:24.512 "impl_name": "posix", 00:35:24.512 "recv_buf_size": 2097152, 00:35:24.512 "send_buf_size": 2097152, 00:35:24.512 "enable_recv_pipe": true, 00:35:24.512 "enable_quickack": false, 00:35:24.512 "enable_placement_id": 0, 00:35:24.512 "enable_zerocopy_send_server": true, 00:35:24.512 "enable_zerocopy_send_client": false, 00:35:24.512 "zerocopy_threshold": 0, 00:35:24.512 "tls_version": 0, 00:35:24.512 "enable_ktls": false 00:35:24.512 } 
00:35:24.512 } 00:35:24.512 ] 00:35:24.512 }, 00:35:24.512 { 00:35:24.512 "subsystem": "vmd", 00:35:24.512 "config": [] 00:35:24.512 }, 00:35:24.512 { 00:35:24.512 "subsystem": "accel", 00:35:24.512 "config": [ 00:35:24.512 { 00:35:24.512 "method": "accel_set_options", 00:35:24.512 "params": { 00:35:24.512 "small_cache_size": 128, 00:35:24.512 "large_cache_size": 16, 00:35:24.512 "task_count": 2048, 00:35:24.512 "sequence_count": 2048, 00:35:24.512 "buf_count": 2048 00:35:24.512 } 00:35:24.512 } 00:35:24.512 ] 00:35:24.512 }, 00:35:24.512 { 00:35:24.512 "subsystem": "bdev", 00:35:24.512 "config": [ 00:35:24.512 { 00:35:24.512 "method": "bdev_set_options", 00:35:24.512 "params": { 00:35:24.512 "bdev_io_pool_size": 65535, 00:35:24.512 "bdev_io_cache_size": 256, 00:35:24.512 "bdev_auto_examine": true, 00:35:24.512 "iobuf_small_cache_size": 128, 00:35:24.512 "iobuf_large_cache_size": 16 00:35:24.512 } 00:35:24.512 }, 00:35:24.512 { 00:35:24.512 "method": "bdev_raid_set_options", 00:35:24.512 "params": { 00:35:24.512 "process_window_size_kb": 1024, 00:35:24.512 "process_max_bandwidth_mb_sec": 0 00:35:24.512 } 00:35:24.512 }, 00:35:24.512 { 00:35:24.512 "method": "bdev_iscsi_set_options", 00:35:24.512 "params": { 00:35:24.512 "timeout_sec": 30 00:35:24.512 } 00:35:24.512 }, 00:35:24.512 { 00:35:24.512 "method": "bdev_nvme_set_options", 00:35:24.512 "params": { 00:35:24.512 "action_on_timeout": "none", 00:35:24.512 "timeout_us": 0, 00:35:24.512 "timeout_admin_us": 0, 00:35:24.512 "keep_alive_timeout_ms": 10000, 00:35:24.512 "arbitration_burst": 0, 00:35:24.512 "low_priority_weight": 0, 00:35:24.512 "medium_priority_weight": 0, 00:35:24.512 "high_priority_weight": 0, 00:35:24.512 "nvme_adminq_poll_period_us": 10000, 00:35:24.512 "nvme_ioq_poll_period_us": 0, 00:35:24.512 "io_queue_requests": 512, 00:35:24.512 "delay_cmd_submit": true, 00:35:24.512 "transport_retry_count": 4, 00:35:24.512 "bdev_retry_count": 3, 00:35:24.512 "transport_ack_timeout": 0, 00:35:24.512 "ctrlr_loss_timeout_sec": 0, 00:35:24.512 "reconnect_delay_sec": 0, 00:35:24.512 "fast_io_fail_timeout_sec": 0, 00:35:24.512 "disable_auto_failback": false, 00:35:24.512 "generate_uuids": false, 00:35:24.512 "transport_tos": 0, 00:35:24.512 "nvme_error_stat": false, 00:35:24.512 "rdma_srq_size": 0, 00:35:24.512 "io_path_stat": false, 00:35:24.512 "allow_accel_sequence": false, 00:35:24.512 "rdma_max_cq_size": 0, 00:35:24.512 "rdma_cm_event_timeout_ms": 0, 00:35:24.512 "dhchap_digests": [ 00:35:24.512 "sha256", 00:35:24.512 "sha384", 00:35:24.512 "sha512" 00:35:24.512 ], 00:35:24.512 "dhchap_dhgroups": [ 00:35:24.512 "null", 00:35:24.512 "ffdhe2048", 00:35:24.512 "ffdhe3072", 00:35:24.512 "ffdhe4096", 00:35:24.512 "ffdhe6144", 00:35:24.512 "ffdhe8192" 00:35:24.512 ], 00:35:24.512 "rdma_umr_per_io": false 00:35:24.512 } 00:35:24.512 }, 00:35:24.512 { 00:35:24.512 "method": "bdev_nvme_attach_controller", 00:35:24.512 "params": { 00:35:24.512 "name": "nvme0", 00:35:24.512 "trtype": "TCP", 00:35:24.512 "adrfam": "IPv4", 00:35:24.512 "traddr": "127.0.0.1", 00:35:24.512 "trsvcid": "4420", 00:35:24.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:24.512 "prchk_reftag": false, 00:35:24.512 "prchk_guard": false, 00:35:24.512 "ctrlr_loss_timeout_sec": 0, 00:35:24.512 "reconnect_delay_sec": 0, 00:35:24.512 "fast_io_fail_timeout_sec": 0, 00:35:24.512 "psk": "key0", 00:35:24.512 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:24.512 "hdgst": false, 00:35:24.512 "ddgst": false, 00:35:24.512 "multipath": "multipath" 00:35:24.512 } 00:35:24.512 }, 
00:35:24.512 { 00:35:24.512 "method": "bdev_nvme_set_hotplug", 00:35:24.512 "params": { 00:35:24.512 "period_us": 100000, 00:35:24.512 "enable": false 00:35:24.512 } 00:35:24.512 }, 00:35:24.512 { 00:35:24.512 "method": "bdev_wait_for_examine" 00:35:24.512 } 00:35:24.512 ] 00:35:24.512 }, 00:35:24.512 { 00:35:24.512 "subsystem": "nbd", 00:35:24.512 "config": [] 00:35:24.512 } 00:35:24.512 ] 00:35:24.512 }' 00:35:24.512 15:16:17 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:24.512 15:16:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:24.512 [2024-12-11 15:16:17.519400] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 00:35:24.512 [2024-12-11 15:16:17.519449] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3393390 ] 00:35:24.772 [2024-12-11 15:16:17.595087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.772 [2024-12-11 15:16:17.636744] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:24.772 [2024-12-11 15:16:17.798431] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:25.340 15:16:18 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:25.340 15:16:18 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:25.340 15:16:18 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:25.340 15:16:18 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:25.340 15:16:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:25.599 15:16:18 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:25.599 15:16:18 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:25.599 15:16:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:25.599 15:16:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:25.599 15:16:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:25.599 15:16:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:25.599 15:16:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:25.858 15:16:18 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:25.858 15:16:18 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:25.858 15:16:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:25.858 15:16:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:25.858 15:16:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:25.858 15:16:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:25.858 15:16:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:26.117 15:16:18 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:26.117 15:16:18 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:26.117 15:16:18 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:26.117 15:16:18 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:26.376 15:16:19 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:26.376 15:16:19 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:26.376 15:16:19 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.5wtfJNMZjO /tmp/tmp.P21bC5VhWp 00:35:26.376 15:16:19 keyring_file -- keyring/file.sh@20 -- # killprocess 3393390 00:35:26.376 15:16:19 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3393390 ']' 00:35:26.376 15:16:19 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3393390 00:35:26.376 15:16:19 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:26.376 15:16:19 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:26.376 15:16:19 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3393390 00:35:26.376 15:16:19 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:26.376 15:16:19 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:26.376 15:16:19 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3393390' 00:35:26.376 killing process with pid 3393390 00:35:26.376 15:16:19 keyring_file -- common/autotest_common.sh@973 -- # kill 3393390 00:35:26.376 Received shutdown signal, test time was about 1.000000 seconds 00:35:26.376 00:35:26.376 Latency(us) 00:35:26.376 [2024-12-11T14:16:19.424Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:26.376 [2024-12-11T14:16:19.424Z] =================================================================================================================== 00:35:26.376 [2024-12-11T14:16:19.424Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:26.376 15:16:19 keyring_file -- common/autotest_common.sh@978 -- # wait 3393390 00:35:26.376 15:16:19 keyring_file -- keyring/file.sh@21 -- # killprocess 3391867 00:35:26.376 15:16:19 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3391867 ']' 00:35:26.376 15:16:19 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3391867 00:35:26.376 15:16:19 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:26.376 15:16:19 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:26.376 15:16:19 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3391867 00:35:26.635 15:16:19 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:26.636 15:16:19 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:26.636 15:16:19 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3391867' 00:35:26.636 killing process with pid 3391867 00:35:26.636 15:16:19 keyring_file -- common/autotest_common.sh@973 -- # kill 3391867 00:35:26.636 15:16:19 keyring_file -- common/autotest_common.sh@978 -- # wait 3391867 00:35:26.895 00:35:26.895 real 0m11.917s 00:35:26.895 user 0m29.679s 00:35:26.895 sys 0m2.674s 00:35:26.895 15:16:19 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:26.895 15:16:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:26.895 ************************************ 00:35:26.895 END TEST keyring_file 00:35:26.895 ************************************ 00:35:26.895 15:16:19 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:26.895 15:16:19 -- spdk/autotest.sh@294 -- # run_test keyring_linux 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/linux.sh 00:35:26.895 15:16:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:26.895 15:16:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:26.895 15:16:19 -- common/autotest_common.sh@10 -- # set +x 00:35:26.895 ************************************ 00:35:26.895 START TEST keyring_linux 00:35:26.895 ************************************ 00:35:26.895 15:16:19 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/linux.sh 00:35:26.895 Joined session keyring: 1008381129 00:35:26.895 * Looking for test storage... 00:35:26.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring 00:35:26.895 15:16:19 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:26.895 15:16:19 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:35:26.895 15:16:19 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:27.155 15:16:19 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:27.155 15:16:19 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:27.155 15:16:19 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:27.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:27.155 --rc genhtml_branch_coverage=1 00:35:27.155 --rc genhtml_function_coverage=1 00:35:27.155 --rc genhtml_legend=1 00:35:27.155 --rc geninfo_all_blocks=1 00:35:27.155 --rc geninfo_unexecuted_blocks=1 00:35:27.155 00:35:27.155 ' 00:35:27.155 15:16:19 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:27.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:27.155 --rc genhtml_branch_coverage=1 00:35:27.155 --rc genhtml_function_coverage=1 00:35:27.155 --rc genhtml_legend=1 00:35:27.155 --rc geninfo_all_blocks=1 00:35:27.155 --rc geninfo_unexecuted_blocks=1 00:35:27.155 00:35:27.155 ' 00:35:27.155 15:16:19 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:27.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:27.155 --rc genhtml_branch_coverage=1 00:35:27.155 --rc genhtml_function_coverage=1 00:35:27.155 --rc genhtml_legend=1 00:35:27.155 --rc geninfo_all_blocks=1 00:35:27.155 --rc geninfo_unexecuted_blocks=1 00:35:27.155 00:35:27.155 ' 00:35:27.155 15:16:19 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:27.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:27.155 --rc genhtml_branch_coverage=1 00:35:27.155 --rc genhtml_function_coverage=1 00:35:27.155 --rc genhtml_legend=1 00:35:27.155 --rc geninfo_all_blocks=1 00:35:27.155 --rc geninfo_unexecuted_blocks=1 00:35:27.155 00:35:27.155 ' 00:35:27.155 15:16:19 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/common.sh 00:35:27.155 15:16:19 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:27.155 15:16:19 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:27.155 15:16:19 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:27.155 15:16:19 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:27.155 15:16:19 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:27.155 15:16:19 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:27.155 15:16:19 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:27.155 15:16:19 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:27.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:27.156 15:16:19 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:27.156 15:16:19 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:27.156 15:16:19 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:27.156 15:16:20 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:27.156 15:16:20 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:27.156 15:16:20 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:27.156 15:16:20 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:27.156 15:16:20 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:27.156 15:16:20 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:27.156 15:16:20 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:27.156 15:16:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:27.156 15:16:20 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:27.156 15:16:20 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:27.156 15:16:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:27.156 15:16:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:27.156 15:16:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:27.156 15:16:20 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:27.156 15:16:20 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.156 15:16:20 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:27.156 15:16:20 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:35:27.156 15:16:20 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:27.156 15:16:20 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:27.156 15:16:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:27.156 15:16:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:27.156 /tmp/:spdk-test:key0 00:35:27.156 15:16:20 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:27.156 15:16:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:27.156 15:16:20 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:27.156 15:16:20 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:27.156 15:16:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:27.156 15:16:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:27.156 
15:16:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:27.156 15:16:20 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:27.156 15:16:20 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:35:27.156 15:16:20 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:27.156 15:16:20 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:27.156 15:16:20 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:35:27.156 15:16:20 keyring_linux -- nvmf/common.sh@733 -- # python - 00:35:27.156 15:16:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:27.156 15:16:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:27.156 /tmp/:spdk-test:key1 00:35:27.156 15:16:20 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3393948 00:35:27.156 15:16:20 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3393948 00:35:27.156 15:16:20 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:35:27.156 15:16:20 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3393948 ']' 00:35:27.156 15:16:20 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.156 15:16:20 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:27.156 15:16:20 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.156 15:16:20 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:27.156 15:16:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:27.156 [2024-12-11 15:16:20.148649] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:35:27.156 [2024-12-11 15:16:20.148701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3393948 ] 00:35:27.415 [2024-12-11 15:16:20.224882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.416 [2024-12-11 15:16:20.265155] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:27.675 15:16:20 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:27.675 15:16:20 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:27.675 15:16:20 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:27.675 15:16:20 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.675 15:16:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:27.675 [2024-12-11 15:16:20.482220] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.675 null0 00:35:27.675 [2024-12-11 15:16:20.514264] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:27.675 [2024-12-11 15:16:20.514636] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:27.675 15:16:20 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.675 15:16:20 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:27.675 587006467 00:35:27.675 15:16:20 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:27.675 1031159934 00:35:27.675 15:16:20 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3393953 00:35:27.675 15:16:20 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3393953 /var/tmp/bperf.sock 00:35:27.675 15:16:20 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:27.675 15:16:20 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3393953 ']' 00:35:27.675 15:16:20 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:27.675 15:16:20 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:27.675 15:16:20 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:27.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:27.675 15:16:20 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:27.675 15:16:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:27.675 [2024-12-11 15:16:20.586241] Starting SPDK v25.01-pre git sha1 4dfeb7f95 / DPDK 24.03.0 initialization... 
00:35:27.675 [2024-12-11 15:16:20.586287] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3393953 ] 00:35:27.675 [2024-12-11 15:16:20.661592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:27.675 [2024-12-11 15:16:20.703090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:27.955 15:16:20 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:27.955 15:16:20 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:27.955 15:16:20 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:27.955 15:16:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:27.955 15:16:20 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:27.955 15:16:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:28.248 15:16:21 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:28.248 15:16:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:28.512 [2024-12-11 15:16:21.365356] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:28.512 nvme0n1 00:35:28.512 15:16:21 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:35:28.512 15:16:21 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:28.512 15:16:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:28.512 15:16:21 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:28.512 15:16:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:28.512 15:16:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:28.770 15:16:21 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:28.770 15:16:21 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:28.770 15:16:21 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:28.770 15:16:21 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:28.770 15:16:21 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:28.770 15:16:21 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:28.771 15:16:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:29.028 15:16:21 keyring_linux -- keyring/linux.sh@25 -- # sn=587006467 00:35:29.028 15:16:21 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:29.028 15:16:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:29.028 15:16:21 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 587006467 == \5\8\7\0\0\6\4\6\7 ]]
00:35:29.028 15:16:21 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 587006467
00:35:29.028 15:16:21 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:35:29.028 15:16:21 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:29.028 Running I/O for 1 seconds...
00:35:29.964 21159.00 IOPS, 82.65 MiB/s
00:35:29.964 Latency(us)
00:35:29.964 [2024-12-11T14:16:23.012Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:29.964 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:35:29.964 nvme0n1 : 1.01 21160.27 82.66 0.00 0.00 6028.82 5014.93 14702.86
00:35:29.964 [2024-12-11T14:16:23.012Z] ===================================================================================================================
00:35:29.964 [2024-12-11T14:16:23.012Z] Total : 21160.27 82.66 0.00 0.00 6028.82 5014.93 14702.86
00:35:29.964 {
00:35:29.964   "results": [
00:35:29.964     {
00:35:29.964       "job": "nvme0n1",
00:35:29.964       "core_mask": "0x2",
00:35:29.964       "workload": "randread",
00:35:29.964       "status": "finished",
00:35:29.964       "queue_depth": 128,
00:35:29.964       "io_size": 4096,
00:35:29.964       "runtime": 1.005989,
00:35:29.964       "iops": 21160.271136165506,
00:35:29.964       "mibps": 82.6573091256465,
00:35:29.964       "io_failed": 0,
00:35:29.964       "io_timeout": 0,
00:35:29.964       "avg_latency_us": 6028.815796046168,
00:35:29.964       "min_latency_us": 5014.928695652174,
00:35:29.964       "max_latency_us": 14702.859130434783
00:35:29.964     }
00:35:29.964   ],
00:35:29.964   "core_count": 1
00:35:29.964 }
00:35:29.964 15:16:22 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:35:29.964 15:16:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:35:30.223 15:16:23 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:35:30.223 15:16:23 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:35:30.223 15:16:23 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:35:30.223 15:16:23 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:35:30.223 15:16:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:35:30.223 15:16:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:35:30.482 15:16:23 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:35:30.482 15:16:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:35:30.482 15:16:23 keyring_linux -- keyring/linux.sh@23 -- # return
00:35:30.482 15:16:23 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:35:30.482 15:16:23 keyring_linux -- common/autotest_common.sh@652 -- # local es=0
00:35:30.482 15:16:23 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:35:30.482 15:16:23 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:35:30.482 15:16:23 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:30.482 15:16:23 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:35:30.482 15:16:23 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:30.482 15:16:23 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:35:30.482 15:16:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:35:30.742 [2024-12-11 15:16:23.585230] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:35:30.742 [2024-12-11 15:16:23.585991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5bc20 (107): Transport endpoint is not connected
00:35:30.742 [2024-12-11 15:16:23.586985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5bc20 (9): Bad file descriptor
00:35:30.742 [2024-12-11 15:16:23.587987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:35:30.742 [2024-12-11 15:16:23.587996] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:35:30.742 [2024-12-11 15:16:23.588003] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:35:30.742 [2024-12-11 15:16:23.588012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
00:35:30.742 request:
00:35:30.742 {
00:35:30.742   "name": "nvme0",
00:35:30.742   "trtype": "tcp",
00:35:30.742   "traddr": "127.0.0.1",
00:35:30.742   "adrfam": "ipv4",
00:35:30.742   "trsvcid": "4420",
00:35:30.742   "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:35:30.742   "hostnqn": "nqn.2016-06.io.spdk:host0",
00:35:30.742   "prchk_reftag": false,
00:35:30.742   "prchk_guard": false,
00:35:30.742   "hdgst": false,
00:35:30.742   "ddgst": false,
00:35:30.742   "psk": ":spdk-test:key1",
00:35:30.742   "allow_unrecognized_csi": false,
00:35:30.742   "method": "bdev_nvme_attach_controller",
00:35:30.742   "req_id": 1
00:35:30.742 }
00:35:30.742 Got JSON-RPC error response
00:35:30.742 response:
00:35:30.742 {
00:35:30.742   "code": -5,
00:35:30.742   "message": "Input/output error"
00:35:30.742 }
00:35:30.742 15:16:23 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:35:30.742 15:16:23 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:35:30.742 15:16:23 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:35:30.742 15:16:23 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:35:30.742 15:16:23 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:35:30.742 15:16:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:35:30.742 15:16:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:35:30.742 15:16:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:35:30.742 15:16:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:35:30.742 15:16:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:35:30.742 15:16:23 keyring_linux -- keyring/linux.sh@33 -- # sn=587006467
00:35:30.742 15:16:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 587006467
00:35:30.742 1 links removed
00:35:30.742 15:16:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:35:30.742 15:16:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:35:30.742 15:16:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:35:30.742 15:16:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:35:30.742 15:16:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:35:30.742 15:16:23 keyring_linux -- keyring/linux.sh@33 -- # sn=1031159934
00:35:30.742 15:16:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1031159934
00:35:30.742 1 links removed
00:35:30.742 15:16:23 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3393953
00:35:30.742 15:16:23 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3393953 ']'
00:35:30.742 15:16:23 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3393953
00:35:30.742 15:16:23 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:35:30.742 15:16:23 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:30.742 15:16:23 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3393953
00:35:30.743 15:16:23 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:30.743 15:16:23 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:30.743 15:16:23 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3393953'
00:35:30.743 killing process with pid 3393953
00:35:30.743 15:16:23 keyring_linux -- common/autotest_common.sh@973 -- # kill 3393953
00:35:30.743 Received shutdown signal, test time was about 1.000000 seconds
00:35:30.743
00:35:30.743
Latency(us) 00:35:30.743 [2024-12-11T14:16:23.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:30.743 [2024-12-11T14:16:23.791Z] =================================================================================================================== 00:35:30.743 [2024-12-11T14:16:23.791Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:30.743 15:16:23 keyring_linux -- common/autotest_common.sh@978 -- # wait 3393953 00:35:31.002 15:16:23 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3393948 00:35:31.002 15:16:23 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3393948 ']' 00:35:31.002 15:16:23 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3393948 00:35:31.002 15:16:23 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:35:31.002 15:16:23 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:31.002 15:16:23 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3393948 00:35:31.002 15:16:23 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:31.002 15:16:23 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:31.002 15:16:23 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3393948' 00:35:31.002 killing process with pid 3393948 00:35:31.002 15:16:23 keyring_linux -- common/autotest_common.sh@973 -- # kill 3393948 00:35:31.002 15:16:23 keyring_linux -- common/autotest_common.sh@978 -- # wait 3393948 00:35:31.261 00:35:31.261 real 0m4.410s 00:35:31.261 user 0m8.381s 00:35:31.261 sys 0m1.413s 00:35:31.261 15:16:24 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:31.261 15:16:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:31.261 ************************************ 00:35:31.261 END TEST keyring_linux 00:35:31.261 ************************************ 00:35:31.261 15:16:24 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:35:31.261 15:16:24 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:35:31.261 15:16:24 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:35:31.261 15:16:24 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:35:31.261 15:16:24 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:35:31.262 15:16:24 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:35:31.262 15:16:24 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:35:31.262 15:16:24 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:31.262 15:16:24 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:31.262 15:16:24 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:31.262 15:16:24 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:31.262 15:16:24 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:31.262 15:16:24 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:31.262 15:16:24 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:31.262 15:16:24 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:31.262 15:16:24 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:31.262 15:16:24 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:31.262 15:16:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:31.262 15:16:24 -- common/autotest_common.sh@10 -- # set +x 00:35:31.262 15:16:24 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:31.262 15:16:24 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:31.262 15:16:24 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:31.262 15:16:24 -- common/autotest_common.sh@10 -- # set +x 00:35:36.537 INFO: APP EXITING 
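For reference, the keyring_linux flow above drives the kernel session keyring with ordinary keyctl calls, so the same steps can be replayed by hand. A minimal sketch follows (editorial, not captured output); the key name and PSK are the ones shown in the log, the initial add step is an assumption (it happened earlier in the run), and the serial number keyctl reports (587006467 above) differs from run to run:
  # Assumed setup step: place the NVMe TLS PSK in the session keyring under the name SPDK will reference.
  keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s
  # Resolve the key's serial number and read the payload back, as keyring/linux.sh@16 and @27 do above.
  sn=$(keyctl search @s user :spdk-test:key0)
  keyctl print "$sn"
  # Unlink it during cleanup; this is what produces the "1 links removed" lines above.
  keyctl unlink "$sn"
Once the key is in place, the test references it by name over the bperf RPC socket (rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller ... --psk :spdk-test:key0); the later attach with --psk :spdk-test:key1 is wrapped in NOT and is expected to fail, since keyring_get_keys reports zero registered keys just before it runs.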
00:35:36.537 INFO: killing all VMs 00:35:36.537 INFO: killing vhost app 00:35:36.537 INFO: EXIT DONE 00:35:39.074 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:35:39.074 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:35:39.074 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:35:39.074 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:35:39.074 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:35:39.074 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:35:39.075 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:35:39.075 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:35:39.075 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:35:39.075 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:35:39.075 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:35:39.075 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:35:39.075 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:35:39.334 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:35:39.334 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:35:39.334 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:35:39.334 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:35:42.627 Cleaning 00:35:42.627 Removing: /var/run/dpdk/spdk0/config 00:35:42.627 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:42.627 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:42.627 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:42.627 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:42.627 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:35:42.627 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:35:42.627 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:35:42.627 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:35:42.627 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:42.627 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:42.627 Removing: /var/run/dpdk/spdk1/config 00:35:42.627 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:35:42.627 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:35:42.627 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:35:42.627 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:35:42.627 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:35:42.627 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:35:42.627 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:35:42.627 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:35:42.627 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:35:42.627 Removing: /var/run/dpdk/spdk1/hugepage_info 00:35:42.627 Removing: /var/run/dpdk/spdk2/config 00:35:42.627 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:35:42.627 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:35:42.627 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:35:42.627 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:35:42.627 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:35:42.627 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:35:42.627 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:35:42.627 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:35:42.627 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:35:42.627 Removing: /var/run/dpdk/spdk2/hugepage_info 00:35:42.627 Removing: /var/run/dpdk/spdk3/config 00:35:42.627 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:35:42.627 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:35:42.627 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:35:42.627 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:35:42.627 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:35:42.627 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:35:42.627 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:35:42.627 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:35:42.627 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:35:42.627 Removing: /var/run/dpdk/spdk3/hugepage_info 00:35:42.627 Removing: /var/run/dpdk/spdk4/config 00:35:42.627 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:35:42.627 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:35:42.627 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:35:42.627 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:35:42.627 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:35:42.627 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:35:42.627 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:35:42.627 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:35:42.627 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:35:42.627 Removing: /var/run/dpdk/spdk4/hugepage_info 00:35:42.627 Removing: /dev/shm/bdev_svc_trace.1 00:35:42.627 Removing: /dev/shm/nvmf_trace.0 00:35:42.627 Removing: /dev/shm/spdk_tgt_trace.pid2915265 00:35:42.627 Removing: /var/run/dpdk/spdk0 00:35:42.627 Removing: /var/run/dpdk/spdk1 00:35:42.627 Removing: /var/run/dpdk/spdk2 00:35:42.627 Removing: /var/run/dpdk/spdk3 00:35:42.627 Removing: /var/run/dpdk/spdk4 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2913113 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2914178 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2915265 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2915900 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2916846 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2916910 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2917958 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2918066 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2918409 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2920065 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2921721 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2922022 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2922311 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2922613 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2922809 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2922985 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2923199 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2923487 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2924224 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2927331 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2927641 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2927853 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2927957 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2928451 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2928461 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2928954 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2928961 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2929218 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2929289 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2929492 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2929653 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2930070 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2930317 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2930612 00:35:42.627 Removing: 
/var/run/dpdk/spdk_pid2934338 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2938800 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2949079 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2949585 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2953868 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2954155 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2958451 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2964387 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2967613 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2977827 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2986753 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2988590 00:35:42.627 Removing: /var/run/dpdk/spdk_pid2989519 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3006376 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3010442 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3055958 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3061363 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3067641 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3074115 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3074135 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3074912 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3075747 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3076663 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3077195 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3077350 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3077584 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3077595 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3077616 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3078519 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3079428 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3080345 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3080811 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3080832 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3081172 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3082280 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3083269 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3091380 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3120732 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3125243 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3126850 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3128683 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3128726 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3128934 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3129168 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3129668 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3131374 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3132268 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3132704 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3134871 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3135361 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3135905 00:35:42.627 Removing: /var/run/dpdk/spdk_pid3140658 00:35:42.628 Removing: /var/run/dpdk/spdk_pid3146044 00:35:42.628 Removing: /var/run/dpdk/spdk_pid3146045 00:35:42.628 Removing: /var/run/dpdk/spdk_pid3146046 00:35:42.628 Removing: /var/run/dpdk/spdk_pid3150052 00:35:42.628 Removing: /var/run/dpdk/spdk_pid3158506 00:35:42.628 Removing: /var/run/dpdk/spdk_pid3162426 00:35:42.628 Removing: /var/run/dpdk/spdk_pid3168416 00:35:42.628 Removing: /var/run/dpdk/spdk_pid3169721 00:35:42.628 Removing: /var/run/dpdk/spdk_pid3171249 00:35:42.628 Removing: /var/run/dpdk/spdk_pid3172604 00:35:42.628 Removing: /var/run/dpdk/spdk_pid3177303 00:35:42.628 Removing: /var/run/dpdk/spdk_pid3181643 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3185635 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3193564 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3193566 00:35:42.887 Removing: 
/var/run/dpdk/spdk_pid3198274 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3198451 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3198597 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3198981 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3199185 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3203685 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3204139 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3208607 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3211354 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3216591 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3221864 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3230645 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3237752 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3237759 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3256924 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3257398 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3258068 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3258555 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3259296 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3259778 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3260386 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3260937 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3265026 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3265358 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3271371 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3271567 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3276851 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3281054 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3291321 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3291985 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3296164 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3296490 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3300529 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3306173 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3308734 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3318777 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3327558 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3329162 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3330205 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3346738 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3350561 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3353249 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3360985 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3361082 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3366115 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3367985 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3369951 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3371209 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3373185 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3374378 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3383561 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3384172 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3384631 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3386902 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3387370 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3387901 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3391867 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3391877 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3393390 00:35:42.887 Removing: /var/run/dpdk/spdk_pid3393948 00:35:43.146 Removing: /var/run/dpdk/spdk_pid3393953 00:35:43.146 Clean 00:35:43.146 15:16:36 -- common/autotest_common.sh@1453 -- # return 0 00:35:43.146 15:16:36 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:35:43.146 15:16:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:43.146 15:16:36 -- common/autotest_common.sh@10 -- # set +x 00:35:43.147 15:16:36 -- 
spdk/autotest.sh@391 -- # timing_exit autotest 00:35:43.147 15:16:36 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:43.147 15:16:36 -- common/autotest_common.sh@10 -- # set +x 00:35:43.147 15:16:36 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/timing.txt 00:35:43.147 15:16:36 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/udev.log ]] 00:35:43.147 15:16:36 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/udev.log 00:35:43.147 15:16:36 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:35:43.147 15:16:36 -- spdk/autotest.sh@398 -- # hostname 00:35:43.147 15:16:36 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_test.info 00:35:43.406 geninfo: WARNING: invalid characters removed from testname! 00:36:05.346 15:16:56 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:36:06.725 15:16:59 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:36:08.632 15:17:01 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:36:10.537 15:17:03 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:36:12.443 15:17:05 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:36:14.345 15:17:07 -- spdk/autotest.sh@407 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info 00:36:16.247 15:17:09 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:16.247 15:17:09 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:16.247 15:17:09 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/timing.txt ]] 00:36:16.247 15:17:09 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:16.247 15:17:09 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:16.247 15:17:09 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/timing.txt 00:36:16.247 + [[ -n 2834946 ]] 00:36:16.247 + sudo kill 2834946 00:36:16.257 [Pipeline] } 00:36:16.272 [Pipeline] // stage 00:36:16.277 [Pipeline] } 00:36:16.290 [Pipeline] // timeout 00:36:16.296 [Pipeline] } 00:36:16.309 [Pipeline] // catchError 00:36:16.314 [Pipeline] } 00:36:16.328 [Pipeline] // wrap 00:36:16.334 [Pipeline] } 00:36:16.346 [Pipeline] // catchError 00:36:16.355 [Pipeline] stage 00:36:16.357 [Pipeline] { (Epilogue) 00:36:16.370 [Pipeline] catchError 00:36:16.371 [Pipeline] { 00:36:16.384 [Pipeline] echo 00:36:16.385 Cleanup processes 00:36:16.391 [Pipeline] sh 00:36:16.678 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:36:16.678 3404575 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:36:16.691 [Pipeline] sh 00:36:16.976 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:36:16.976 ++ grep -v 'sudo pgrep' 00:36:16.976 ++ awk '{print $1}' 00:36:16.976 + sudo kill -9 00:36:16.976 + true 00:36:16.987 [Pipeline] sh 00:36:17.271 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:29.604 [Pipeline] sh 00:36:29.888 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:29.888 Artifacts sizes are good 00:36:29.903 [Pipeline] archiveArtifacts 00:36:29.915 Archiving artifacts 00:36:30.035 [Pipeline] sh 00:36:30.319 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2 00:36:30.333 [Pipeline] cleanWs 00:36:30.343 [WS-CLEANUP] Deleting project workspace... 00:36:30.343 [WS-CLEANUP] Deferred wipeout is used... 00:36:30.349 [WS-CLEANUP] done 00:36:30.351 [Pipeline] } 00:36:30.367 [Pipeline] // catchError 00:36:30.378 [Pipeline] sh 00:36:30.681 + logger -p user.info -t JENKINS-CI 00:36:30.689 [Pipeline] } 00:36:30.702 [Pipeline] // stage 00:36:30.707 [Pipeline] } 00:36:30.720 [Pipeline] // node 00:36:30.724 [Pipeline] End of Pipeline 00:36:30.767 Finished: SUCCESS
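For reference, the coverage post-processing captured above amounts to the following lcov sequence. This is an editorial sketch: the repeated --rc options are elided, and $OUT is shorthand (not from the log) for /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output:
  # Capture coverage from this host's test run, then merge it with the pre-test baseline.
  lcov -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk -t spdk-wfp-08 -o $OUT/cov_test.info
  lcov -q -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info
  # Strip bundled DPDK, system headers, and example/app sources from the combined report.
  lcov -q -r $OUT/cov_total.info '*/dpdk/*' -o $OUT/cov_total.info
  lcov -q -r $OUT/cov_total.info --ignore-errors unused,unused '/usr/*' -o $OUT/cov_total.info
  lcov -q -r $OUT/cov_total.info '*/examples/vmd/*' -o $OUT/cov_total.info
  lcov -q -r $OUT/cov_total.info '*/app/spdk_lspci/*' -o $OUT/cov_total.info
  lcov -q -r $OUT/cov_total.info '*/app/spdk_top/*' -o $OUT/cov_total.info
  # Drop the intermediate capture files once cov_total.info is in place.
  rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR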